Ensuring High Productivity Even With Distributed Engineering Teams

The traditional workspace has been witnessing an overhaul. From cubicles to the open office concept to the standing workspace… new trends to increase employee productivity arise every day. One such concept, a fundamentally different way of working when it arrived, has now cemented its place in the industry – that of distributed teams. The successful implementation of a distributed workforce by companies such as Mozilla, GitHub, MySQL, Buffer, WordPress, and more is a testament to the fact that geographical boundaries need not deter employee productivity and accountability. In fact, WordPress has over 200 employees distributed all across the globe, each contributing successfully in their individual job roles.

Having a distributed workforce has definite advantages. It brings more diversity to the business, offers new perspectives on problem-solving, opens up a wider pool of trained resources, and reduces operational costs. Further, a study conducted by BCG and the WHU-Otto Beisheim School of Management showed that well-managed distributed teams can outperform those that share an office space. However, ensuring the high productivity of a distributed engineering team demands ninja-like management precision.

In our years of experience working in a distributed setup with our clients, we have realized that one of the greatest benefits of such a workforce is the immense intellectual capital we have harnessed. We now have some truly bright engineers working for us. Our clients’ teams located in the United States and our team in India successfully collaborate on software projects without a hitch. Let’s take a look at how we make these distributed engineering teams work productively, focus on rapid application delivery, and produce high-quality software each time.

Have a Well Defined Ecosystem

First, it is imperative to have a well-defined ecosystem in which a distributed team can work and deliver high-quality applications in a cost-effective manner. You need the right processes, knowledge experts, accelerators, continuous evaluation of the tools and technologies in use, strong testing practices, and so on. Along with this, it is key to establish clear communication processes and optimal documentation. Leverage business communication tools and dashboards for predictability and transparency, and to avoid timeline overruns. Further, it is essential to bring all the important project stakeholders – the product owner, the team lead, the architecture owner, etc. – together at the beginning of each project to outline the scope and technical strategy for a uniform vision.

Have Designated Project Managers In Each Location

Distributed teams demand a hybrid approach to project management. It helps, though it may not be essential, to have the stakeholders shouldering lead roles – such as the architects and the project managers – in the same location or time zone as the client. Along with this, it is also essential to have a lead who serves as the single point of contact and acts as the local team’s spokesperson to streamline communication, help the team stay on track, and avoid delivery delays.

Appropriate Work Allocation and Accountability
Appropriate work allocation is an essential ingredient that can make or break distributed engineering teams. Instead of assigning work based on location, it should be assigned based on team capacity, skills, and the release and sprint goals. Having cross-functional teams that can work independently with inputs from the product owner helps considerably in increasing team productivity, since work can be redistributed in the case of sprint backlogs. Giving each team member ownership of a feature can also increase accountability, measurability, and ultimately the productivity of the entire team.

Have a Common Engineering and Development Language
At the outset of the project, it is essential to establish a common engineering and development language. Having clearly outlined development procedures, code styles, standards, and patterns contributes to building a strong product irrespective of the teams’ locational distribution, as code merges and integrations are likely to have far fewer defects. It is also important to align and standardize tools to avoid spending time understanding or troubleshooting tool configurations or properties. Securing team buy-in on the engineering methodology (are you going to use TDD, BDD, traditional agile, etc.?) helps eliminate subjectivity and ambiguity. It is also essential to clearly outline coding standards, technologies of choice, tools, and architectural design to avoid misalignment of values and engineering standards.

Such relevant information should also be published and maintained in the shared community (a virtual community across the distributed teams that serves as a single information source) using tools and dashboards that provide comprehensive information at a glance even for the uninitiated.
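To keep such standards enforceable rather than aspirational, many teams codify them in small scripted checks run on every commit. The sketch below is a deliberately minimal, hypothetical Python example; the specific rules (a 100-character line limit, spaces instead of tabs) are invented for illustration, not a recommendation:

```python
# check_style.py - a minimal, illustrative style gate that a distributed
# team might run in CI to enforce shared coding standards. The rules
# below are example choices agreed at project kickoff, not a standard.

MAX_LINE_LENGTH = 100  # an assumed, team-agreed limit

def check_file(lines):
    """Return a list of (line_number, problem) style violations."""
    violations = []
    for number, line in enumerate(lines, start=1):
        if len(line.rstrip("\n")) > MAX_LINE_LENGTH:
            violations.append((number, "line exceeds %d characters" % MAX_LINE_LENGTH))
        if line.startswith("\t"):
            violations.append((number, "tab indentation (spaces agreed)"))
    return violations
```

Because every location runs the identical check, a merge never stalls on arguments about formatting; the script, not an individual, is the arbiter.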

Leverage Time Zones Optimally

To ensure the same level of communication in a distributed team as in a co-located one, there has to be impeccable time zone management, starting with establishing some overlapping work hours. Doing so makes it easier to involve the key stakeholders in sprint planning, sprint reviews, daily stand-ups, retrospectives, etc. In the case of a distributed team, it makes sense to break sprint planning into two parts: one that determines, at a high level, what each team is doing and develops an understanding of the sprint backlogs and dependencies; and another for detailed clarification and breaking stories down into ‘tasks’. It is also important to have a remote proxy for the sprint review to establish what each local team has completed.
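As a toy illustration of why overlap planning matters, the Python sketch below computes the shared working hours between two locations on a given date. The time zones and the 9-to-5 workday are assumptions for the example:

```python
# A small sketch of computing the shared working hours between two
# distributed teams on a given date. The time zones and the 9-to-5
# workday below are illustrative assumptions.
from datetime import date, datetime
from zoneinfo import ZoneInfo

def overlap_hours(day, tz_a, tz_b, workday=(9, 17)):
    """Hours during which both teams are inside their local workday."""
    start_h, end_h = workday

    def window(tz):
        start = datetime(day.year, day.month, day.day, start_h, tzinfo=ZoneInfo(tz))
        end = datetime(day.year, day.month, day.day, end_h, tzinfo=ZoneInfo(tz))
        return start, end

    a_start, a_end = window(tz_a)
    b_start, b_end = window(tz_b)
    overlap = (min(a_end, b_end) - max(a_start, b_start)).total_seconds() / 3600
    return max(overlap, 0.0)

# In January, a London/New York pair shares three standard working hours,
# while a New York/Kolkata pair shares none - which is exactly why such
# teams shift or extend their days to create an overlap window.
london_ny = overlap_hours(date(2024, 1, 15), "Europe/London", "America/New_York")
ny_kolkata = overlap_hours(date(2024, 1, 15), "America/New_York", "Asia/Kolkata")
```

When the computed overlap is zero, teams typically agree on a rotating early or late window so that ceremonies like stand-ups can still include everyone.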

Testing is another important aspect that can impact the productivity of distributed engineering teams. Since most distributed teams leverage the ‘Follow the Sun’ principle, activities such as testing can be passed on to the other time zone. So, by the time the development team is back at work, the testing is already done. This can significantly improve the productivity of the engineering team.

Have An Integrated Code Base

When working towards ensuring the productivity of distributed engineering teams, it is imperative to have a single code repository so that everyone checks in to the same code base. Giving all teams access to the same CI server, so that all builds and tests run against every iteration, prevents build breakages and the eventual productivity loss. Along with this, it is also essential to have a hot backup server in each location to battle adversities such as server downtime, power outages, etc.

Along with all this, there is another critical ingredient that helps make distributed engineering teams more productive… trust. It is essential for distributed teams to trust one another and function as a single cohesive unit. Understanding cultural differences, respecting time zones, and maintaining clear communication between team members are a few things that can build trust within the team, foster collaboration, and contribute towards creating a highly productive distributed engineering team. That’s our story – what’s yours about distributed engineering teams?

Acceptance Criteria vs. Acceptance Tests – Know the Difference

Testing is at the heart of newer development methodologies such as Behavior Driven Development, Test Driven Development and, of course, Agile. In a previous blog on the role of testing in Behavior Driven Development, we touched upon two topics – Acceptance Tests and Acceptance Criteria – and how BDD has changed the approach towards these testing stages. In this blog, we take a look at these similar-sounding and yet very different concepts.

It thus becomes essential to first define what the product is expected to do and the conditions it must satisfy to be accepted by a user. In order to achieve this, testers need to flesh out comprehensive ‘user stories’, then iterate criteria specific to each of these user stories and define the value proposition, the characteristics of the solution, and the user flow. Testers then need to develop test cases based on these user stories and define the conditions that need to be satisfied for the product to be ‘acceptable’ to a user. This set of conditions, defining the standards that the product or piece of software must meet, is called the ‘Acceptance Criteria’.

Loosely speaking, Acceptance Criteria documents the expected behavior of a product feature. It also takes into account cases that could have been missed by the testing team while developing test cases. Defining the Acceptance Criteria is the first testing step that comes after writing user stories. Usually, the Acceptance Criteria is concise, largely conceptual, and also captures the potential failure scenarios. Acceptance Criteria are also called ‘Conditions of Satisfaction’. These consist of a set of statements that specify the functional, non-functional and performance requirements at the existing stage of the project with a clear pass or fail result. Defined Acceptance Criteria outline the parameters of the user story and determine when a user story is completed.

Acceptance Criteria should always be written before development commences so that they capture the customer’s intent rather than iterate functionalities in relation to the development reality. Acceptance Criteria should thus be written clearly, in simple language that even non-technical people, such as the customer and the product owner, can understand. The idea behind writing Acceptance Criteria is to state the intent but not the solution; hence they should define ‘what’ to expect rather than ‘how’ to achieve or implement a particular functionality.

What are Acceptance Tests?

Acceptance Testing is the process that verifies whether the installed piece of code or software works as designed for the user. It is a validation activity that uses test cases covering the scenarios under which the software is expected to be used, and it is conducted in a ‘production-like’ environment on hardware similar to what the user or customer will use. Acceptance Tests assert the functional correctness of the code and hence contain detailed specifications of the system behavior for all meaningful scenarios. Unlike Acceptance Criteria, which define the expected behavior of a particular feature, Acceptance Tests ensure that the features are working correctly and define the behavior of the system, and hence demand more detailed documentation. Acceptance Tests check the reliability and availability of the code using stress tests. They also check the scalability, usability, maintainability, configurability, and security of the software being developed, determine whether the developed system satisfies the Acceptance Criteria, and check whether the user story is correctly implemented.

Acceptance Tests can be written in the same language as the code itself, or in a business-readable format such as the Gherkin language commonly used in Behavior Driven Development.

While Acceptance Criteria are developed prior to the development phase by the product owners or business analysts, Acceptance Tests may be implemented during product development. They are detailed expressions, implemented in the code itself by the developers and the testers. Acceptance Testing is usually performed after System Testing, before the system is made available for customer use. To put it simply, Acceptance Tests ensure that the user requirements are captured in a directly verifiable manner, and also that any problems not identified during integration or unit tests are captured and subsequently corrected.
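As an illustration of the difference, here is one acceptance criterion and a directly verifiable acceptance test for it, sketched in Python. The discount rule, the `checkout_total` function, and the dollar amounts are all invented for this example:

```python
# Acceptance Criterion (the 'what'): "Given a cart worth $100 or more,
# when the customer checks out, then a 10% discount is applied."
#
# The acceptance test below verifies that behavior (the 'how it is
# checked') against a deliberately simple, illustrative implementation.

def checkout_total(cart_value):
    """Apply a 10% discount to carts of $100 or more (example rule)."""
    if cart_value >= 100:
        return round(cart_value * 0.9, 2)
    return cart_value

def test_discount_applied_at_threshold():
    # Given a cart worth exactly $100 (the boundary case)
    # When the customer checks out
    # Then a 10% discount is applied
    assert checkout_total(100) == 90.0

def test_no_discount_below_threshold():
    # Given a cart just under the threshold
    # Then the price is unchanged
    assert checkout_total(99.99) == 99.99

test_discount_applied_at_threshold()
test_no_discount_below_threshold()
```

Note how the criterion fits in one sentence of plain language, while the test pins down concrete values, boundary cases, and an executable pass/fail result.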

There are two kinds of Acceptance Testing, namely:

Internal Acceptance Testing – Performed in-house by members who are not involved in the development and testing of the project, to ensure that the system works as designed. This type of testing is also called Alpha Testing.
External Acceptance Testing – This testing is of two types:
a) Customer Acceptance Testing – where the customer does the testing.
b) Beta Testing or User Acceptance Testing – where the end users test the product.

Conclusion:
In conclusion, we can say that, amongst other things, the main difference between Acceptance Criteria and Acceptance Tests lies in the fact that while the former define ‘what needs to be done’, the latter define ‘how it should be done’. Simply put, Acceptance Tests complete the story started by the Acceptance Criteria, and together they make sure that the story is complete and of high functional value.

Behavior Driven Development and Automation Testing

Organizations across the globe are feeling the pressure to churn out error-free products faster and reduce time to market. This has led to the growth of new development methodologies that put testing at the heart of product development and foster growing collaboration between testers and developers. Some of these methodologies have also driven an increased impetus toward test automation. Behavior Driven Development, or BDD, is one such methodology followed in agile product development. BDD is often considered an extension of Test Driven Development. The focus of BDD is on identifying the required behavior in the user story and writing acceptance tests based on it. BDD also aims to develop a common language to drive development, so that the team members understand the critical behaviors expected of an application and realize what their actual deliverables are.

It has become imperative for the development team to understand the business goals a product is expected to achieve if they wish to deliver quality products within the shortest timeframe. BDD puts the customer at the heart of the development approach. The requirements, the business situation, and the acceptance criteria are articulated in the Gherkin language, which is domain- and business-driven and easy to understand. The BDD approach identifies the behaviors that contribute directly to business outcomes by describing them in a way that is accessible to developers, domain experts, and testers. BDD leans heavily on collaboration, as the features and requirements are written jointly by the business analysts, quality analysts, and developers as GWT, i.e. ‘Given-When-Then’, scenarios. These ‘scenarios’ are then leveraged by the developers and testers for product development. One of the main advantages of Behavior Driven Development is that it makes the conversation between developers and testers more organized, and that the approach is written in plain language. However, since the scenarios are written in a natural language, they have to be very well written in order to reduce maintenance woes, which can otherwise become tedious and time-consuming. The focus of BDD is to ensure that the development vocabulary moves from being singularly ‘test based’ to ‘business based’.

Role of Test Automation in Behavior Driven Development

We believe that the role of testing and test automation is of primary importance to the success of any BDD initiative. Testers have to write tests that verify the behavior of the system or product being built. The test results read as success stories of the features, and hence are readable by non-technical users as well. For Behavior Driven Development to be successful, it becomes essential to identify and verify only those behaviors that contribute directly to business outcomes.

Testers in the BDD environment have to identify what to test and what not to test, decide how much should be tested in one go, and understand why a test failed. It can be said that BDD rethinks the approach to unit and acceptance testing. The sense is that acceptance criteria should be defined in terms of ‘scenarios’ expressed in the GWT format. Here, ‘Given’ defines the preconditions or contextual steps of the test case, ‘When’ is the event or the steps taken, and ‘Then’ is the final outcome of the scenario. Much like Test Driven Development, BDD advocates that tests should be written first and should describe the functionalities that can be matched to the requirements being tested. Given the breadth of the acceptance tests in BDD, test automation becomes a critical contributor to success.
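To show how GWT scenarios become automated tests, here is a toy Python sketch of the step-mapping idea. Real BDD frameworks such as Cucumber or Behave do this with far richer pattern matching and reporting; everything below (the `step` decorator, the basket example) is a simplified, hypothetical illustration, not any framework’s actual API:

```python
# A toy illustration of how BDD tools map Given-When-Then steps to code.
# Each plain-language step line is bound to a Python function; running a
# scenario executes the bound functions in order against a shared context.

steps = {}

def step(text):
    """Register a function as the implementation of one scenario step."""
    def register(fn):
        steps[text] = fn
        return fn
    return register

@step("Given an empty basket")
def given_empty(ctx):
    ctx["basket"] = []

@step("When the user adds an apple")
def when_add(ctx):
    ctx["basket"].append("apple")

@step("Then the basket contains 1 item")
def then_count(ctx):
    assert len(ctx["basket"]) == 1

def run_scenario(lines):
    """Execute each line of a scenario against the registered steps."""
    ctx = {}
    for line in lines:
        steps[line](ctx)
    return ctx

run_scenario([
    "Given an empty basket",
    "When the user adds an apple",
    "Then the basket contains 1 item",
])
```

The business-readable scenario and the executable test are the same artifact, which is precisely what moves the vocabulary from ‘test based’ to ‘business based’.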

Since Behavior Driven Development focuses on testing behavior instead of testing implementation, it helps greatly when building detailed automated unit tests. Testers thus have to focus on writing test cases with the scenario, rather than the code implementation, in mind. By doing so, even when the implementation changes, the testers do not have to change the tests, inputs, and outputs to accommodate it. This makes unit testing automation much faster, less tedious, and more accurate.
Since test cases are derived directly from the feature file setups and contain example sets, they make for easy implementation and do not demand extra information for the test data. The automated test suites validate the software in each build and also provide updated functional and technical documentation. This reduces development time and also helps drive down maintenance costs.

Though Behavior Driven Development has its set of advantages, it can sometimes fall prey to oversimplification. Testers and development teams thus need to understand that while a failing test guarantees that the product is not ready to go to market, a passing test does not indicate that the product is ready for release either. At the same time, this framework will only work successfully when there is close collaboration between the development, testing, and business teams, with each informed of updates and progress in a timely manner. Only then can the cost overruns that stem from miscommunication be avoided. Since the testing effort moves largely to automation and covers all business features and use cases, this framework ensures a high defect detection rate through higher test coverage, faster changes, and timely releases.

Have you moved the BDD way in your development efforts? Do share what challenges you faced and how the effort panned out.

Automated Testing of Responsive Design – What’s On & What’s Not?

With growing digitization and the increasing proliferation of smartphones and tablets, it is hardly a wonder that mobile devices are geared to become the main drivers of internet traffic. The Visual Networking Index predicted that internet traffic would cross the zettabyte mark in 2016 and double by 2019. It’s not just browsing, but commerce too, that is becoming more mobile. Criteo’s State of Mobile Commerce report states that four out of ten transactions happen across multiple devices such as smartphones and tablets.

Clearly, we have established ourselves in the ‘mobile age’. Since mobile has evolved into such a big driver of the internet, it is only natural that websites today have to be “responsive” to screen size. In 2015, when Google launched its mobile-friendly algorithm, ‘responsive web design’ became a burning hot topic of discussion across the internet. Having a responsive design ensured that the user experience was uniform, seamless, and fast, that search engine optimization was preserved, and that the branding experience remained consistent.

The Testing Challenge To Automate Responsive Design
Responsive web design takes a single-source-code approach to web development and targets multiscreen delivery. It is on the basis of screen size that the browser content adapts itself, determining what content to display and what to hide. Clearly, it becomes absolutely essential to test that the web application renders correctly irrespective of screen size. Equally obviously, this demands multi-level testing. Given the sheer number and variety of mobile devices in the market and the different operating systems, testing responsive web designs can become an onerous task.

Replicating the end-user experience to assess whether the application renders well across the plethora of devices can be tricky… an application running on a desktop monitor will render differently when scaled down to a 1136-by-640 pixel screen of an iPhone. Testing responsive applications hence means testing them not only across popular devices but also across devices newly launched in the market. Clearly, responsive websites need intensive testing, but testing across so many devices, on each available browser and operating system, and choosing configurations of physical devices can be a challenge. This means more test case combinations across devices, operating systems, and browsers, and verifying these combinations against the same code base.

In testing responsive designs, it becomes essential to check that the functionality, visual layout, and performance of the website are consistent across all digital platforms and user conditions. This demands continuous testing of new features and verification that the website works optimally across browsers, networks, devices, and operating systems.

Given the intensiveness of testing, having a robust test automation framework for testing responsive applications is a must. This can dramatically increase the efficiency and thoroughness of the testing efforts.

Visual testing
In order to ensure that a responsive application responds to any device in a functionally correct manner, it is important to increase the focus on UI testing. Given the complexity of responsive design, you need to identify all the DOM (Document Object Model) objects on the desktop as well as on mobile devices and add relevant UI checkpoints to verify the visual displays. The alignment of text, controls, buttons, and images, font sizes, and text readability across resolutions and screen sizes have to be tested thoroughly. Automating these tests ensures that any issue gets highlighted faster and the feedback loop becomes shorter, thus ensuring that there are no application glitches.

Performance Testing
Slow page load times and wrong object sizes are two of the biggest challenges of responsive design. Given that an average website has over 400 objects, the key is to ensure that the size properties of objects do not change and that images load correctly onto different viewports. Functional tests of responsive web applications must be done keeping real-world conditions in mind. This involves testing against usage conditions such as devices, network coverage, background apps, location, etc., and ensuring that the web content displays correctly irrespective of device size. Automating client-side performance testing helps testers assess the time content takes to load on different devices and gauge the overall performance of the website. Memory utilization, stress tests, load tests, recovery tests, etc. need to be performed extensively to assess application performance. Utilizing test automation to write comprehensive test cases for these makes performance testing much easier, faster, and, in the end, more Agile.
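In practice, many automated client-side performance checks boil down to asserting a load-time budget per device class. The Python sketch below illustrates the idea; the device classes, budgets, and measurements are invented numbers, and a real suite would gather its timings from the browser or a synthetic-monitoring tool rather than a hard-coded dictionary:

```python
# An illustrative load-time budget check of the kind a responsive-design
# performance suite might run. All numbers here are assumed examples.

BUDGETS_MS = {          # per-device-class load-time budgets (assumed)
    "desktop": 2000,
    "tablet": 2500,
    "phone-4g": 3000,
}

def check_budgets(measured_ms):
    """Return the device classes whose measured load time exceeds budget."""
    return [device for device, ms in measured_ms.items()
            if ms > BUDGETS_MS.get(device, 0)]

# Example run with made-up measurements: only the tablet misses its budget.
over_budget = check_budgets({"desktop": 1800, "tablet": 2600, "phone-4g": 2900})
```

Failing the build whenever `over_budget` is non-empty turns a vague goal (“the site should feel fast on mobile”) into an automated, repeatable gate.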

Device Testing
While it might not be possible to test the web design on each and every device available in the market, leveraging mobile device simulators to test application functionality goes a long way. You can test the application across all form factors, major OS versions, and display densities. Automating navigation testing helps testers gain greater coverage of the user paths and allows for a faster end-to-end run-through of the responsive web application. With test automation, it becomes easier to create content breakpoints, test screen real estate, and transition between responsive and non-responsive environments.
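Testing content breakpoints largely reduces to checking that each viewport width maps to the layout the design should serve, which automation can cover exhaustively, including the boundary widths where rendering bugs usually hide. A minimal Python sketch, with breakpoint values that are illustrative assumptions rather than any standard:

```python
# A sketch of breakpoint testing: map a viewport width to the layout the
# responsive design should render. The breakpoint widths are assumed
# example values, not a recommendation.

BREAKPOINTS = [          # (minimum width in px, layout name), widest first
    (1024, "desktop"),
    (768, "tablet"),
    (0, "phone"),
]

def layout_for(width_px):
    """Return the layout the design should render at this viewport width."""
    for min_width, layout in BREAKPOINTS:
        if width_px >= min_width:
            return layout
    return "phone"

# Automated cases sweep widths of interest, especially the exact boundaries:
for width, expected in [(1136, "desktop"), (768, "tablet"), (767, "phone")]:
    assert layout_for(width) == expected
```

A real suite would drive a browser to each width and compare rendered output, but the oracle – width in, expected layout out – is exactly this table.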

Regression Testing
Testers need to adopt test automation extensively to increase the scope of regression testing of responsive web applications. With each new functionality, testers have to make sure that nothing breaks and that the basic application functionality remains unaffected despite the new additions. Given that these tests are voluminous and must be repeated often, leveraging test automation for regression testing ensures the application’s performance remains unhindered.
To maximize the ROI from your automation initiative, it makes sense to turn to analytics and assess how the responsive web application is actually used. By leveraging analytics, testers can narrow down the choices for device and network testing, identify breakpoints, and easily assess what should appear on the screen when navigating from one breakpoint to another.

In a nutshell, by carefully choosing candidates for automation, testers can expedite the testing process, achieve greater test coverage and deliver a seamless user experience – and that’s always useful!

Flash Forward – From Flash to Adobe Animate CC

Adobe Flash has dominated some sectors, like eLearning development, for 20 years now, and for a very long time Flash was the last word in interactive, engaging websites too. Flash played a critical role in creating rich media content, and this ease drove its wide applicability to eLearning courses and websites. In the early days of Flash, numerous businesses adopted it to create interactive web portals, games, and animated websites. Some of the notable names were Cartoon Network, Disney, Nike, Hewlett-Packard, Nokia, and GE. Flash saw further growth and penetration when Adobe introduced the hardware-accelerated Stage3D to develop product demonstrations and virtual tools. As Flash leaves its teens, though, the world has fundamentally changed.

There are multiple reasons why Flash needed a revamp. Its lack of touch support on smartphones, compatibility issues on iOS, the need for a Flash player to run content, and non-responsiveness were some of the major reasons that prompted Apple to move away from Flash, and the die was cast.

Adobe recognized that it was time for a change, and when it announced the rechristened product, Adobe Animate CC, better days seemed to be coming for developers. We believe that Adobe did the right thing at the right time. With the new name came a more user-friendly outlook and a more market-focused product designed to keep up with the latest trends.

Most reviews of the product suggest the following reasons for you to look at Adobe Animate CC:

  1. Adobe Animate CC retains the familiar Flash-like tool user interface for rich media creation and extends its support to the HTML5 canvas and WebGL.
  2. Existing Flash animations can be converted into the HTML5 Canvas without any issues. Even fairly lengthy animations convert with ease.
  3. The Motion Editor provided in Animate CC allows granular control over motion between properties, making it much easier to create animations.
  4. Animate CC produces output that integrates easily into responsive HTML5 frameworks and that can scale based on device size. It does not, however, publish a fully responsive output by itself.
  5. Animate CC provides a library of reusable content to speed up production and animation in HTML5.
  6. Animate CC provides multi-platform output and supports HTML5, WebGL, Flash, AIR, video, and even custom formats like SVG. It can also export animations as GIFs to be shared online, and in the GAF format that can be used on gaming platforms like Unity3D.
  7. Animate CC’s timeline feature optimizes audio syncing in animations, a major plus over plain HTML5 tooling. It also enables easy control of audio looping.
  8. Videos can be exported in 4K quality using Animate CC, keeping up with the latest trends in video consumption. Videos can have custom resolutions, supporting the latest Ultra HD and Hi-DPI displays.
  9. Animate CC also provides the ability to create vector brushes, similar to Adobe Illustrator.
  10. Animate CC has added Typekit integration, a tool-as-a-service that helps developers choose from a library of high-quality fonts.

Some reviewers have commented that images occasionally did not load while creating animations, but that they did load when the tool was refreshed. This issue can be easily mitigated by pre-loading images. Other factors, such as browser performance and network issues, could also cause delays in image loading.

It has also been observed that some of the filters did not render the expected results in the HTML5 output, compromising the visual quality and richness of the output. These filters are Gradient Glow, Gradient Bevel, and the Quality, Knockout, Inner Shadow, and Hide Object options of the Drop Shadow filter. Given Adobe’s focus on the product, we anticipate these issues will be addressed in future releases.

One interesting thing to note is that Animate CC has eliminated the dependency on the Flash player completely, though it continues to support Flash output. The tool also complies with the latest Interactive Advertising Bureau (IAB) guidelines and is widely used in the cartoon industry by giants like Nickelodeon and Titmouse Inc. For those seeking a much more in-depth feature comparison between Adobe Animate CC and the Flash versions, we recommend visiting Adobe’s website.

It’s early days yet, but our view is that Animate CC could well be instantly applicable to over one-third of the graphics created today that use Flash and are delivered on more than a billion devices worldwide. Adobe Animate CC marks the beginning of a new era for Flash professionals, just around the time Flash reaches its 20th anniversary!

How Software Development Has Transformed In Front Of My Eyes

“Software development is technical activity conducted by human beings.” – Niklaus Wirth

It’s been about 30 years since I started my career as a software developer, and while I don’t wear my coder’s hat as often as I’d like, I still think of myself as a developer. In my conversations with the many smart developers at ThinkSys, I can’t escape the feeling that software development now is a completely different species from what it was when veterans like me started out, and that this transformation has particularly accelerated in the last few years. Indulge me as I look back at what has changed – some of this may resonate with you too!

First, though, what’s driving this change? One doesn’t have to look far – a combination of the Cloud, the Internet, and Mobility is the primary factor. This combination has changed the way people use software, as well as the purposes for which they use it. Marc Andreessen famously spoke of software eating the world – essentially this has come true, and pretty much every aspect of our life is now driven by software. This integration of life and technology has made customers more demanding, and more prone to consider moving their business if their expectations are not met. What does this mean for software development? To my mind, this is what is driving the “release fast, iterate often” movement in software development.

Given that need, the traditional SDLC, driven by the “Waterfall Model”, has obviously been found wanting. It was too rigid, too linear, and just not nimble enough to meet the new demands. The Agile Manifesto offered a ready alternative, and companies adopted it enthusiastically. As Cloud- and SaaS-based models of delivering software took over, Agile accelerated further and transformed into Continuous Delivery of software. This transformation is now more or less complete. Last year, an Atlassian survey found that 77% of all software development organizations practiced Agile development, and 50% practiced Continuous Delivery (CD).

I had written earlier about how software development teams have changed with the advent of Agile and DevOps. The change in the team has been necessitated by a change in process. The software development process has become more granular, testing is now carried out in parallel with development, automation is much more central, business and domain owners are much more closely integrated into software design, and there is a continuous effort to elicit customer feedback and integrate it into the software. In parallel, software development teams have become more distributed and multi-locational. This has made the creation of software a much more collaborative process. In fact, the Atlassian survey mentioned earlier found that 78% of software organizations were using a Distributed Version Control System (like Git).

Another big change we have seen is in the way software is architected. Self-contained, monolithic architectures made way for Service Oriented Architecture (SOA), which focused on the creation of modular elements that delivered business services. This has now further transformed into Microservices, with even more granular, potentially reusable services carved out of the old monolith. Apart from the need for speed, this change was also driven by the peculiarities of the Cloud and Mobile. There is now a greater emphasis on a small footprint and more efficient usage of resources. Another sea-change is the emphasis on “Usability” at every stage. In the early days, there was almost a sense that software would be used by “experts”, and the attention was on functionality. Today software lives and dies by the User Experience. So much attention is now lavished on the UI and UX – how the software looks, how easy it is to use, and how intuitive it is to learn are now key. Eric Raymond said, “The easiest programs to use are those which demand the least new learning from the user.”

As it happens, we have found better ways to make software, and programming languages have kept pace. As I watched, we moved from C to Java, .NET, and PHP, and on to Python and Ruby; on the front end, from plain JavaScript to jQuery and now Angular/React. Coding has become closer to how we naturally express ourselves. Along with these languages came their incredibly powerful libraries, which made coding easier, faster, and more intuitive. In parallel came the open source wave – several special-purpose, community-contributed pieces of code that helped meet the very same objective, while being reusable too. This is where much change is anticipated, in ways that we may not even consider possible. There is talk of how developers may need to become data scientists in the future – a nod to the anticipated impact of Machine Learning and Artificial Intelligence.

However much the process of software development changes, one thing I can be sure of is that software will always be built to address the needs of customers, and that the intent will always be to deliver value, with quality. In that quest, it is design that will always be paramount. As Louis Srygley put it, “Without requirements or design, programming is the art of adding bugs to an empty text file.” Some things will never change, I guess!

How Agile & DevOps Have Transformed The Traditional Software Development Team

Over the course of the past decade and a little more, software development has witnessed a sea change. Gone are the times when software development was a somewhat isolated process, when development, business, operations, and testing teams worked in their own silos. As the need for speed in software development increased, new development methodologies arose. Agile and lean software development thus gained ground, as they helped development teams gain the momentum they needed to put software into production and reduce time-to-market. As the emphasis on digitization increased, agile development methodologies evolved further, and we are now witnessing the rise of the DevOps culture.

DevOps further pressed the accelerator on software delivery. Research from Puppet Labs shows that organizations using DevOps have been able to deploy code up to 30 times faster, and that this code was 50 times less likely to fail – incredible! The rise of technologies such as the Cloud and the adoption of open-source technologies have further pushed organizations to adopt DevOps. A Gartner survey estimated that 25% of the Global 2000 IT organizations planned to adopt DevOps in 2016 alone.

The success of methodologies such as agile and DevOps certainly hinges on the dexterity and capabilities of the development teams. At the same time, these methodologies demand a culture shift within the organization too. Teams adopting these methodologies cannot work in silos and expect project success. Clearly, one of the greatest contributors to their success is collaboration between teams and departments.

With most considering DevOps to be an extension of agile, it becomes apparent that there is a need for greater collaboration between the software development team, IT professionals, and the business team. The idea here is to develop user-centric software, and to do that successfully, development teams need access to faster user feedback. There is also an increased emphasis on automation to increase the speed of delivery, and the focus is on creating an environment where development, testing, and product releases proceed seamlessly, like a well-oiled machine. In order to do so, there has to be a tighter integration of QA and operations into the development process itself. That implies a team structure with business, development, quality, and operations all tied together.

Development methodologies such as DevOps require poly-skilled, autonomous teams that have a set of common goals. Unlike traditional software development, where the development team produced the code and simply handed it off to the testing and QA team for evaluation, in these modern methodologies the development and operations teams have to function collectively, as they are equally responsible for service and product maintenance.
Instead of focusing on just one aspect of development and production, DevOps engineers have to assist the development and QA teams and help them address development needs, so that the final product is of high quality, is error-free, and can be pushed out to production within the shortest timeframe. From automating builds to setting up servers to writing custom scripts for a specific technology stack, DevOps engineers act as the ultimate facilitators of high-quality software development.

A passage from Thomas Friedman’s book ‘The World is Flat’ talks about the change in culture and organizational structure as the world transforms, and it could apply to these development methodologies as well. He states that factors such as globalization, the opening of the borders of developing countries, the progress of software, and the growth of the internet are compelling the software industry to seek flatter structures in order to achieve competitive success. This demands not only the flattening of software releases but also of organizational structures, which is only made possible by the “convergence of automation, tools, collaboration, and industry best practices and patterns.”

The motivation behind developing methodologies such as Agile and DevOps, and using them in conjunction, was to take the frustration of releasing and maintaining software out of software development. To do this, teams have to be cross-functional and experienced not only in development but also in areas such as databases, configuration management, testing, and infrastructure, which is only possible when development and operations teams work collaboratively. Thus we have seen the rise of developer-testers, release managers, automation architects, security engineers, utility technology players, experience assurance experts, and the like, who understand not only development but also business operations and user requirements.
As the velocity of software development increases, the traditional, role-bound software development team becomes increasingly redundant. With these new philosophies, every role is shifting and evolving. Teams thus need a wider understanding of what they are trying to achieve, and must develop it, test it, and then deploy it. The role of the developer does not end with producing certain lines of code, and the tester is not just expected to assess whether a certain functionality is achieved. Everyone in the value chain is required to validate the user experience of the application under real-life conditions and scenarios. Companies that have adopted Agile and DevOps successfully have only been able to do so when they realized that they had to simplify the fragmented product development process, improve interactions between business and IT, and move from a project-oriented to a product-oriented mindset.

Is Microsoft A Secret Enterprise Mobility Challenger?

While declaring the 2015 numbers, Microsoft COO Kevin Turner singled out one product as their “hottest” and predicted that it would be a “$1 billion product in the future”. This product was the Enterprise Mobility Suite (EMS), only a year in the market at the time of Turner’s enthusiastic endorsement. At the time, Corporate VP Brad Anderson said, “As the value and necessity of EMM grows, we see customers evolving their approach, innovating, and bringing new needs and demands every day. On a really regular basis I see the traditional point solution MDM vendors, or the identity and access management vendors, struggling to keep up with these demands – customers are seeking more comprehensive and holistic solutions that are architected for (and can scale to) the cloud.”

In the time since that announcement, while EMS doesn’t yet seem to have hit that landmark number, Microsoft’s focus on the space is clearly apparent. In line with Anderson’s observation, there also seems a clear recognition of the kind of customers to target. Organizations that appreciate comprehensive solutions, with robust architecture, the ability to scale, and a significant cloud story seem to be in their sights – in other words, Enterprise customers. Does that mean that Microsoft could be a secret challenger for the Enterprise Mobility market?

First, though, perhaps Microsoft’s Enterprise focus shouldn’t come as a surprise – this focus has always been there. The revenues for the first quarter of the ongoing financial year, as a case in point, showed the most significant growth in Office commercial revenue (up 5%), server revenue (up 11%), and in other Enterprise-friendly products like the “intelligent cloud” (up 8%) and Azure (up a whopping 116%).

Microsoft thus has a ready opening with its Office suite of products – still a staple in most enterprises. It seems a natural extension for those enterprises to turn to Office 365 when they want to extend the reach of those productivity apps to the mobile workforce. Microsoft reported that Office has already been downloaded 340 million times on iPhones, iPads, and Android devices. This may only be the tip of the app iceberg, though – there are a further 669,000 apps for phones, tablets, and desktops on the Windows Store. This signifies a clear attempt by Microsoft to build a comprehensive ecosystem for the Enterprise.

Another beachhead seems to have been established by the organic growth of Microsoft in the Enterprise segment with its newly-found Cloud focus. Microsoft reported that 80% of the Fortune 500 were on the Microsoft Cloud. It’s not only large Enterprises turning to the Microsoft Cloud; 40% of Azure’s revenue comes from ISVs and startups. This is significant because there is a natural coming together of the Cloud and Mobility all across the Enterprise. This forms a potent combination that Microsoft is setting itself up to exploit.

One clear sign of Microsoft’s Enterprise interest is visible in how EMS has evolved. The product, earlier called the Enterprise Mobility Suite, is now named Enterprise Mobility + Security (still EMS) and includes a significant nod to the concern of Enterprises everywhere about security. A key part of the suite is Microsoft Intune. Intune has capabilities for managing mobile devices, apps, and even PCs, all from the Cloud. This allows employees to leverage data and corporate apps, on-demand, from anywhere – all while remaining secure. The suite also features Azure Rights Management, which makes securely sharing protected files inside and outside the organization very easy. Other significant inclusions are Azure Active Directory Premium for managing identity and access, Azure Information Protection for protecting information, and Microsoft Advanced Threat Analytics plus Microsoft Cloud App Security for identity-driven security. Together, all this forms a pretty formidable security shield designed for Enterprise acceptability.

This does not mean that it’s all smooth sailing, though. Microsoft’s mobile story has had its fair share of ups and downs as Nokia and Windows Mobile will attest. That said, though, there is clearly some fresh thinking sweeping through the corridors of power at Redmond WA and that could well mean a new Enterprise Mobility generation built on a solid Microsoft foundation – stranger things have happened!

The Fundamentals of Continuous Integration Testing

Over the past decade, most organizations have adopted agile development methodologies to develop products faster and reduce their time to market. As agile development methodologies have evolved, they have given rise to newer development practices that put testing at the heart of development. Continuous Integration is one such development practice; it adopts an architecture-based approach to give development teams more flexibility and ensure the production of high-quality software, even more frequently.

Testing and development expert Martin Fowler defines Continuous Integration as a practice where members of a development team integrate their work frequently – at least daily, leading to multiple integrations per day – with each integration verified by an automated build and test. This allows for faster error detection and significantly reduced integration errors, making the development process more efficient. Adopting Continuous Integration enables teams to develop software that is more cohesive and robust, even while releasing at extremely short intervals.

One of the main contributors to Continuous Integration success is continuous testing. Delivering high-quality software in extremely short timeframes is not possible unless each and every piece of code is tested thoroughly. Testing, thus, becomes one of the most important elements of Continuous Integration. The development team also has to work on developing a comprehensive automated testing suite both at the unit and at the functional level to guarantee code quality.

Since the goal of Continuous Integration is to never break the build, it becomes imperative to take all measures to ensure that untested or broken code does not get committed, and that strict version control policies are implemented. The entire environment has to be based on automation, to ensure that any and every code addition to the application results in a releasable version. Additionally, it has to be possible to build any version of the application.

Continuous Integration is a subset of Continuous Delivery, where the built application is delivered to the testing and production stages to ensure application reliability, generate faster user feedback, and ensure continued high performance. Continuous Integration automates the manual stages of application development and hence makes the development and deployment process faster. Frequent and incremental builds are the hallmark of Continuous Integration, and this largely eliminates the need for a separate, big-bang integration testing phase at the end of a project.

In Continuous Integration, developers submit new code and code changes to a central code repository, where the release manager merges the code with the main branch and then pushes out the new release. The Continuous Integration system monitors the version control system for changes and launches a build after getting the source code from the repository. The server then runs unit tests and functional tests to check the functionality, validity, and quality of the product. The same package is then deployed for acceptance testing, and finally deployed to the production server.
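To make that flow concrete, here is a minimal sketch of such a commit-triggered pipeline. The stage commands here are simple shell placeholders – a real setup would invoke the project’s actual build tool and a CI server such as Jenkins or Bamboo – but the stop-at-first-failure logic is the essence of “never break the build”:

```python
import subprocess

# Placeholder stage commands; a real pipeline would call the project's
# build tool (e.g. make, Maven, Gradle) and its test runners.
STAGES = {
    "build": "echo compiling sources",
    "unit tests": "echo running unit tests",
    "functional tests": "echo running functional tests",
}

def run_stage(name, command):
    """Run one pipeline stage; success means an exit code of 0."""
    result = subprocess.run(command, shell=True)
    return result.returncode == 0

def run_pipeline(stages):
    """Run stages in order, stopping at the first failure so the team
    is alerted as early as possible."""
    for name, command in stages.items():
        if not run_stage(name, command):
            return f"FAILED at {name}"  # notify the team; fix immediately
    return "SUCCESS"                    # package can move to acceptance testing
```

A CI server effectively wraps this loop with a watcher on the version control system: each new commit to the main branch triggers `run_pipeline`, and a failure blocks the release until fixed.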

Automation plays a critical role in Continuous Integration, ensuring that all packages can be deployed on all servers at the click of a button. To enable this, it becomes essential to maintain a single source repository, automate the build, keep the build fast, and make the build self-testing. Along with this, every commit has to be built on the integration machine, keeping systems transparent and easily accessible to the invested parties so they can test the product in a clone of the production environment. The Continuous Integration server has to competently inform the respective teams of each successful build and alert the team in case of any failure. The team then has to ensure that the issue is fixed at the earliest.

It is clear that with Continuous Integration, testing, too, is continuous. Some key testing areas to focus on are:

  • Ensure that continuous regression tests run in the background and capably provide regular feedback, to minimize regression defects.
  • Carry out continuous performance tests to study the application’s response time and to identify changes in speed, reaction time, and application consistency.
  • Conduct frequent load tests to verify that performance goals are met and the application is ready for use.
  • Ensure that load tests begin with smaller, incremental scenarios and culminate in one large package.
  • Conduct continuous scalability testing to gauge throughput, network usage, and CPU and memory usage, to reduce business risks.
  • Run end-to-end functional tests to verify the functionality of the product in different scenarios.
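As a concrete illustration of the unit-level slice of such a suite, a check that runs on every commit might look like the following. The `discount` function is a hypothetical piece of application code, included only so the test case has something to exercise:

```python
import unittest

def discount(price, percent):
    """Hypothetical application code: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Runs in the CI pipeline on every commit; any failure blocks the build."""

    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)
```

Run automatically on every commit (for example via `python -m unittest`), a failing case here would stop the build before the broken change ever reaches the main branch.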

Continuous Integration sets the stage for Continuous Delivery by reducing manual intervention. It also brings greater transparency to the entire development and QA process, which makes it faster to take fact-based decisions to improve efficiency. Since Continuous Integration testing aims to find bugs and code breaks faster with the help of more automation, the development process becomes more cost-efficient. Steve Brodie, CEO of Electric Cloud, aptly summarizes the importance of automation in a Continuous Integration and Continuous Delivery world: “You’re only as fast as the slowest element, the slowest phase in your software delivery pipeline”. Clearly, automation lies at the heart of the Continuous Integration process and makes each change smaller, more manageable, and more efficient to implement.

Practical Ideas to Stay Creative in Software Testing

As software testers – or, for that matter, professionals in any field – we are frequently chastised for being monotonous in our line of thought, lacking in ingenuity, or, plainly speaking, being dumb altogether. To top it all, we are frequently reminded to think “out of the box” (as if that sounds very imaginative ;P). However, creativity is not a trait reserved for a select few, and it can be inculcated through minor attitudinal shifts. The following are not a set of guidelines to become the next Einstein, but they will certainly aid you in your endeavor to become the next torchbearer in the field of testing.

1. Old is Gold:

You are 5 years into your job as a hardened testing professional. You’d think your long-forgotten Java skills are about as useful to you today as a typewriter might be. Right?… WRONG. For you to grow into an A-league professional, a developer’s insight into debugging problems is worth its weight in gold. Sharpening those coding skills from college will help you adapt better and customize your use of a testing tool.

2. Brainstorm:

When creating a test case, you are normally provided with an Excel sheet enumerating the various guidelines from the developer team and the requirements from the client. For once, chuck ’em out of the window (not literally, of course). Get your pencil and notebook out (yeah, just like the good ol’ school days). For five minutes, think of all the test case scenarios you can dish out of your mind. Jot them down and review them. At the end of it all, what do we have? A plethora of test cases, without even glancing at the mundane list of client requirements.

3. The Programmer community:

It always bodes well to have an ear out for the latest IN thing the developer team is discussing – what new tools and languages they are deploying to make major efficiency gains and add value to their products. Be in the loop. And don’t stop with the development teams; maintain a good rapport with the designers, the DevOps engineers, and the like. While you are at it, be on the lookout for any glaring gaps they are overlooking, and the critical bugs they keep encountering regularly. Try to work out ways to rectify them.

4. Coding, a hobby, really?

Getting involved in app development contests as a pastime will throw you into an ocean of bugs that are in tune with the latest methodologies deployed towards making the glitch-free software applications of today.

5. Peers and that theory of getting more by giving:

It’s essential to keep abreast of what your colleagues are up to. One way of effectively keeping tabs is to engage through social media and be part of conferences and forums. They say knowledge improves by sharing it further. The point is to share your own solutions while at the same time picking the brains of others.

6. News, news and some more news:

Thanks to the tsunami of information that has engulfed us today, finding information is not so hard; cherry-picking what is relevant to our needs is the hard bit. Handy tools like RSS readers and Evernote can help us capitalize on our hunt for new ideas by keeping us up to date with current events in the field of software testing. Rummage through old, dusty user manuals and forage through the history of testing problems that still hold relevance.

Creating your own ideas, and building them up with others’ after reviewing them, will unleash the creative tester inside that you always dreamt of being.

10 Essential Testing Stages for your Mobile Apps

2016 was truly the ‘year of the mobile’. Mobile apps are maturing, consumer apps are becoming smarter, and there is an increasing emphasis on the consumerization of enterprise apps. Slow, poor-performing, bug-riddled apps have no place on today’s smartphones. Clearly, mobile apps need to be tested thoroughly to ensure the features and functionalities of the application perform optimally. Given that almost all industries are leaning towards mobile apps to make interactions with their consumers faster and more seamless (Gartner predicts over 268 billion mobile downloads in 2017, generating revenue of USD 77 billion), the demand for mobile testing is on the upswing. Mobile app testing is more complex than testing web applications, primarily because of the need to test on different platforms. Unlike web application testing, where there is a single dominant platform, mobile apps need to be developed and then tested on iOS, Android, and sometimes more platforms. Additionally, unlike desktops, mobile apps must deal with several device form factors. Mobile app testing also becomes more complex because factors such as application type, target audience, and distribution channels need to be taken into consideration when designing the test plans and test cases.

In this blog post, we look at ten essential testing stages for mobile applications:

  1. Installation testing:
Once the application is ready, testers need to conduct installation testing to ensure that the user can smoothly install or uninstall the application. Additionally, they have to check that the application updates properly and does not crash when upgrading from an older version to a newer one. Testers also have to ensure that all application data is completely removed when the application is uninstalled.
  2. Target Device and OS testing:
Mobile testers have to ensure that the mobile app functions as designed across a plethora of mobile devices and operating systems. Using real devices and device emulators, testers can check the basic application functionality and understand the application’s behavior across the selected devices and form factors. The application also has to be tested across all major OS versions in the current installed base, to ensure that it performs as designed irrespective of the operating system.
  3. UI and UX testing:
UI and UX testing are essential to verify the look and feel of the application. This testing has to be done from the user’s perspective, to ensure that the application is intuitive, easy to use, and has industry-accepted interfaces. Testing is needed to ensure that language-translation facilities are available, menus and icons display correctly, and application items stay synchronized with user actions.
  4. Functionality Testing:
    Functionality testing tests the functional behavior of the application to ensure that the application is working according to the specified requirements. This involves testing user interactions and transactions to validate if all mandatory fields are working as designed. Testing is also needed to verify that the device is able to multitask and process requirements across platforms and devices when the app is being accessed. Since functional testing is quite comprehensive, testing teams may have to leverage test automation to increase coverage and efficiency for best results.
  5. Interrupt testing:
Users can be interrupted by calls, SMS, MMS, messages, notifications, network outages, device power-cycle notifications, etc. when using an application. Mobile app testers have to perform interruption testing to ensure that the app can capably handle these interruptions by going into a suspended state and then resuming once the interruption is over. Testers can use monkey tools to generate the many possible interrupts, watch for app crashes, freezes, UI glitches, excessive battery consumption, etc., and ensure that the app resumes the current view after the interruption.
  6. Data network testing:
To provide useful functionality, mobile apps rely on network connectivity. Network testing involves conducting network simulation tests that mimic cellular networks and bandwidth constraints, to identify connectivity problems and bottlenecks and then study their impact on application performance. Testers have to ensure that the mobile app performs optimally at varying network speeds and handles network transitions with ease.
  7. Hardware keys testing:
Mobile devices are packed with hardware and sensors that apps can use. Gyroscopes, proximity sensors, location sensors, ambient light sensors, etc., and hardware features such as the camera, storage, microphone, and display, can all be used within the application itself. Mobile testers thus have to test the app in different sensor-specific and hardware-specific environments to ensure consistent application performance.
  8. Performance Testing:
The objective of performance testing is to ensure that the mobile application performs optimally under the stated performance requirements. Performance testing involves testing load conditions, network coverage support, identification of application and infrastructure bottlenecks, response times, memory leaks, and application performance when connectivity is only intermittent.
  9. Load testing:
Testers also have to test application performance in light of sudden traffic surges, and ensure that high load and stress on the application do not cause it to crash. The aim of load testing is to assess the maximum number of simultaneous users the application can support without impacting performance, and to assess the application’s dependability when there is a surge in the number of users.
  10. Security testing:
Security testing involves gathering all the information regarding the application and identifying its threats and vulnerabilities using static and dynamic analysis of the mobile source code. Testers have to check and ensure that the application’s data and network security functionalities are in line with the given guidelines, and that the application only uses the permissions it needs.
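The ramp-up idea behind load testing (stage 9) can be sketched as follows. The “request” here is simulated with a short sleep; a real load test would time actual HTTP calls against the app’s backend, so the target and timings below are stand-ins:

```python
import threading
import time

def simulated_request(latencies, lock):
    """Stand-in for one user hitting the app; a real test would issue an
    HTTP request against the backend and time the response."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server took ~10 ms to respond
    elapsed = time.perf_counter() - start
    with lock:
        latencies.append(elapsed)

def run_load_test(concurrent_users):
    """Launch N simultaneous 'users' and report response-time statistics."""
    latencies, lock = [], threading.Lock()
    threads = [
        threading.Thread(target=simulated_request, args=(latencies, lock))
        for _ in range(concurrent_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {
        "users": concurrent_users,
        "avg_ms": round(sum(latencies) / len(latencies) * 1000, 1),
        "max_ms": round(max(latencies) * 1000, 1),
    }
```

Starting with a small user count and re-running with progressively larger ones, while asserting that `max_ms` stays under the performance goal, gives exactly the incremental-then-peak scenarios the load-testing stage calls for.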

Mobile application testing begins with developing a testing strategy and designing the test plans. The added complexity of devices, OSs, and usage-specific conditions places a special burden on the software testing function to ensure the most usable and best-performing app. How have you gone about testing your mobile apps to achieve this end?

A Simple Guide to Interoperability Testing

Interoperability testing is a type of non-functional testing that verifies the interoperability quality of software. You may well have heard the term ‘interoperability’, but are you actually aware of what it means? Many of us interpret the word incorrectly. So, before discussing interoperability testing, let us first establish the correct and exact meaning of the word interoperability.

What is interoperability?

In general, interoperability is the ability of a system to work and interact with other systems and applications. It may be defined as the property or ability of a system to provide features to, and accept features from, another system or application. Interoperability gives a system the independence to interact, share, and exchange data and information with other systems, without interrupting its intended functionality.

Consider the example of a banking application. A banking application needs to interact, exchange, and share data and information with applications of other banks, other branches of the same bank, or any third-party/merchant vendor, for the purpose of financial and business transactions.

Banking Application Interoperability
Suppose a user at XYZ bank initiates a transfer of money from his account to an account at ABC bank. The banking applications of both banks, built with interoperability in mind, interact with each other independently, without interrupting their intended functioning. They share and exchange data and information – account numbers, credentials, the beneficiary’s name, bank branch, IFSC code, the amount of money, and other relevant details – to carry out the financial transaction and transfer the money.
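The exchange works only because both applications agree on a common message format, independent of how each bank’s system is implemented internally. The sketch below illustrates that idea; the field names and JSON structure are purely illustrative, not an actual banking standard:

```python
import json

# Fields the two banks have agreed to exchange (illustrative, not a real standard)
REQUIRED_FIELDS = {
    "sender_account", "beneficiary_name", "beneficiary_account",
    "ifsc_code", "amount",
}

def build_transfer_message(**fields):
    """XYZ bank serializes the transfer into the agreed JSON format."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(fields)

def receive_transfer_message(payload):
    """ABC bank parses and validates the message. It needs no knowledge of
    XYZ bank's internals -- that independence is interoperability."""
    fields = json.loads(payload)
    if not REQUIRED_FIELDS <= fields.keys():
        raise ValueError("malformed transfer message")
    return fields
```

Interoperability testing, in effect, checks that such round trips work between the real systems: the message produced by one side is accepted, understood, and acted upon correctly by the other.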

Now, what is interoperability testing?

Interoperability testing is a form of non-functional testing used to achieve and maintain interoperability in a system. It is done to ensure end-to-end functionality between two interacting systems, based on their specified standards and protocols – that is, irrespective of the standards and protocols each system follows internally to execute its intended function, the two must interact independently to share and exchange data and information.

Further, interoperability testing is used to verify and validate that there is no data loss, no incorrect or unreliable operation, and no degraded performance between the two systems.

How to perform interoperability testing?

Interoperability testing may be carried out through the following steps.

  • Step 1: Define and describe a proper plan and strategy. This involves understanding each application present in the network, including its behaviour, responses, functionalities, inputs taken, and outputs generated. The network of applications is thus treated as one single unit.
  • Step 2: Implement approaches and techniques such as a requirement traceability matrix (RTM) to map each requirement to a test case, thereby eliminating the possibility of any requirement being left unvisited. Test plans and test cases are derived and developed. Further, essential non-functional attributes of the network of applications, such as security and performance, also need to be verified and validated before executing the interoperability tests.
  • Step 3: Execute the interoperability test cases, with the subsequent activities of logging defects, correcting them, and then retesting and regression testing after patches are applied.
  • Step 4: Evaluate the test results against the RTM to ensure complete coverage of the requirements, with none left out.
  • Step 5: Document and review the approaches, steps, and practices used in the testing, to further improve the process and obtain accurate, high-quality results.
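The RTM check in Steps 2 and 4 can be sketched as a simple coverage evaluation. The requirement IDs and test-case names below are hypothetical placeholders:

```python
# Hypothetical RTM: each requirement maps to the test cases that cover it.
rtm = {
    "REQ-01": ["TC-01", "TC-02"],   # data exchange between the applications
    "REQ-02": ["TC-03"],            # credential handling
    "REQ-03": [],                   # not yet covered by any test case
}

def coverage_gaps(rtm):
    """Return requirements with no mapped test case (Step 4's evaluation)."""
    return [req for req, cases in rtm.items() if not cases]

print(coverage_gaps(rtm))  # ['REQ-03']
```

In practice the RTM would live in a test-management tool rather than a dict, but the evaluation is the same: any requirement with an empty mapping means the suite is incomplete.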

What are challenges faced in the interoperability testing of the application?

  • Testing all the applications together generates a large number of possible combinations, which are difficult to cover exhaustively.
  • Differences between the environment where an application is developed and the one where it is installed may affect testing, especially if either environment goes down.
  • These environment differences also demand a unique test strategy that encompasses the needs and features of both environments.
  • The applications are connected over a network, and this network complexity makes the task of testing even more difficult.
  • Root cause analysis is hard when a defect is located, because the fault may lie in either system or in the interaction between them.

Solutions to these challenges in interoperability testing:

  • Testing techniques and approaches such as orthogonal array testing (OATS), cause-effect graphing, equivalence partitioning, boundary value analysis (BVA), and similar approaches may prove beneficial in mapping the requirements independently to test cases, so as to ensure maximum test coverage.
  • Going through past information and data to study and analyse the conditions under which the system crashes or breaks down, and to estimate how quickly it recovers from failure.
  • Using the above study to prepare a proper plan and strategy.
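As an illustration of one technique named above, here is a minimal equivalence-partitioning sketch. The amount limits and the `classify` function are hypothetical, standing in for a funds-transfer amount field:

```python
# Hypothetical validity limits for a transfer-amount input field.
MIN_AMOUNT, MAX_AMOUNT = 1, 100_000

def classify(amount):
    """Assign an input to its equivalence partition."""
    if amount < MIN_AMOUNT:
        return "invalid-low"
    if amount > MAX_AMOUNT:
        return "invalid-high"
    return "valid"

# One representative per partition, plus the boundary values that BVA adds,
# covers the input space without testing every possible amount.
representatives = [0, 1, 50_000, 100_000, 100_001]
print([classify(a) for a in representatives])
# ['invalid-low', 'valid', 'valid', 'valid', 'invalid-high']
```

The same idea scales up: instead of the combinatorial explosion noted in the challenges, each input dimension contributes only a handful of representative values.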

Conclusion:

Interoperability testing is not an easy task to execute. With proper planning and strategy, however, along with the information, data, and experience gained from the past, it builds strong confidence in the system’s ability to interact uninterruptedly and independently with other systems and applications.

My 2017 Software Industry Predictions

It’s that time of the year when we look into our crystal balls and make predictions for the year ahead. 2016 was a phenomenal year for the technology world. Technologies that emerged over the last few years, such as cloud, firmly planted their feet within the enterprise. Businesses changed course to leverage their digital infrastructures and found new paths to engage with their customers and make their operations more efficient. What became increasingly evident over the past year was that the IT landscape had to change to accommodate business challenges, and that the enterprise was ready to adapt to the change brought forward by technological innovation. Here’s a look at what the year ahead promises – in my view at least.

  • New technologies provide new business opportunities
    2016 witnessed the rise of technologies such as Augmented Reality, Virtual Reality, IoT, and Machine Learning. Forrester Research believes that Augmented Reality will be one of the top five technologies that will completely change the world over the next three to five years. Consumers have been receptive to these new technologies – look at the success of Pokemon Go if you need an example. As consumers become more open to adopting and experimenting with new technologies, organizations gain new opportunities to combine data, mobile devices, and applications to understand customer journeys better. We can thus expect tech budgets to focus more on business technology in the new year.
  • Mobile testing all the way
    The World Quality Report 2016-17 found that while a large number of organizations were taking advantage of mobile solutions, mobile testing skills were still in their nascent stages within the development lifecycle. The lack of mobile testing experts and fragmented testing methodologies seem to have contributed to this. In 2017, however, as consumer and enterprise-grade mobile applications grow in demand and adoption, we can expect mobile testing strategies to mature. Involving test engineers in the development process from the very beginning will be an assured way of improving business outcomes by delivering high-quality, optimally performing apps.
  • The future is cloudy
    IDC estimates that by 2020 “67% of enterprise IT infrastructure and software will be for cloud-based offerings.” We can expect to see more organizations move away from on-premise infrastructure and adopt the cloud. As the demand for agility grows, digital transformation accelerates, and more companies go global, organizations will look to the cloud to drive innovation.
  • Test automation will become more mainstream
    To remain competitive, organizations will have to speed up their application development process. As the need for speedy deployments increases, 2017 will witness test automation become more mainstream. The focus on automation will intensify, with new levels of testing introduced to match the speed of development. Testing and application performance management tools will evolve further, giving organizations a more holistic view of their application development process and allowing them to test new features.
  • The rise of Performance Engineering
    2017 is also expected to witness a greater impetus placed on performance to deliver best user experiences. To enable this, organizations will no longer just depend on performance tests but will increasingly focus on performance engineering to deliver consistent and uniform application performance across diverse platforms, devices, and operating systems.
  • Shift in the enterprise application landscape
    We can expect to see greater consumerization of enterprise applications. Instead of clunky enterprise apps, 2017 will usher in the era of consumer-quality enterprise applications that have intuitive user interfaces and an easily navigable information architecture even in the most complex systems. As multi-device collaboration becomes more mainstream, accessing files and information will become seamless across devices.
  • Agile Outbreak
    One of the biggest trends of 2017, I believe, will be that the application of agile concepts steps out of software/product development and is applied in a much wider organizational context. Agile principle derivatives will become increasingly common in areas such as design/merchandising strategy, design thinking, and growth hacking, and forge interdisciplinary collaborations. Methodologies such as DevOps and continuous delivery will also draw on agile to improve outcomes and build products – and organizations – that are well tested and bug-free. This means integrating testing into the build model. At an organizational level, agile concepts will be implemented to improve quality by ensuring scalability, availability, easy maintenance, and the simplification of complex systems. Agile concepts such as transparency, inspection, continuous learning, process focus, flexibility, and shorter feedback loops, which can benefit every aspect of an organization, will see greater adoption.

It is certainly a very exciting time to be in this industry as we face another year that’s full of technological potential and gear up to usher in the ‘age of the customer’.

Testing in the DevOps World-Test Automation the Sequel

“The most powerful tool we have as developers is automation.” – Scott Hanselman

In a previous blog about some stats that told the developing software testing story, we had identified the impact on testing strategies of the growing adoption of DevOps as a key trend. Our CEO has also written in greater detail on how testing is changing due to DevOps. Looking back, though, it’s clear this story is still developing and that something more remains to be said. In other words, it deserves a sequel, and the central role in this continuing story is reserved for Test Automation.

You may ask, why does Test Automation deserve this star billing? Well, consider the DevOps way for a bit. This is a world with several, almost continuous iterative releases, each within days, even minutes, of each other, all being pushed out to the final production environment into the demanding hands of paying customers. So many releases, so little testing time, and so much pressure to deliver quality – has there ever been a more theoretically perfect case for automated testing? Let’s hope that puts the “Why” question to bed – now let’s move on to the “How” and “What”.

First, a look at the “How”. As was already apparent in Agile, with so many iterative releases following so close on the heels of each other, it is absolutely impossible to build your automation in parallel with the product-under-test. Thus, with DevOps, it becomes critically important to involve the test automation team at an early enough stage of the product planning to be able to anticipate, as much as possible, the direction the product is likely to take and automate early. This is also the time to plan for the automation carefully. Factors to consider include what conditions are most likely to remain reasonably unchanged and which are likely to undergo frequent changes? How reusable can you make the components of the automation framework? This is also a good time to define the objectives you are looking to achieve with the automation – Faster deployment? Better code quality? Greater confidence based on better regression tests? Essentially, start with the end in mind and measure as you go along to know if you are on the right track.

  1. So, on to the “What”. There is both the opportunity to create a comprehensive test automation framework and the threat that some of it could be rendered irrelevant by the pace of change in the product.
  2. That said, there is value in automating the unit tests, as there is a reasonable chance that several specific components will remain relatively stable over the course of the many iterations.
  3. The greatest value could well be in automating the regression testing – the maximum bang for the buck considering the sheer number of releases. Many DevOps models allow code to be delivered late in the cycle, with fixes applied right up until the specific release goes live. Automating the regressions allows you to test the entire code after each such addition, making it far more likely that a high-quality, bug-free product goes out to the end customer.
  4. Among the central value propositions of the DevOps way is continuous deployment on the operational infrastructure. Continuous deployment means continuous integration of code, and of the code into the operational infrastructure. This is where automation can play a key role. An interesting approach many follow is to run integration testing separately from, and in parallel with, unit testing – sometimes even before unit testing. This approach holds that integration testing is not impacted by the business logic and is only concerned with whether the product works on the deployment infrastructure. Automation helps test this quickly and relatively comprehensively.
  5. There is also great value in automating the testing of the build–deployment stage, starting in the test environment itself. The objective is to run the tests in the test or development environment and thereby ensure smooth deployment in the production environment.
A quote we like about DevOps goes, “DevOps is not a goal, but a never-ending process of continual improvement”. While agreeing with Jez Humble, who said this, we would perhaps add that this continual improvement is driven by continual testing, which in turn is based on a solid test automation platform. What do you think?

Entry and Exit Criteria in Software Testing

Software testing, an essential part of the software development life cycle, is a vast and complex process that requires ample time and effort from testers to validate a software product’s quality and effectiveness. This process, though extensively helpful, often becomes tedious, as it has to be executed many times across different platforms. Moreover, there are numerous requirements that need to be considered and tested, which can leave testers uncertain, mostly about where to commence and terminate testing. To avoid this confusion, specific conditions and requirements are established by the QA team before testing begins, which guide testers throughout the testing life cycle. These conditions are termed entry and exit criteria, and they play a crucial role in the software testing life cycle.


What is An Entry Criteria in Software Testing?

As the name specifies, entry criteria are a set of conditions or requirements that must be fulfilled to create a suitable and favorable environment for testing. Finalized after a thorough analysis of software and business requirements, entry criteria ensure the accuracy of the testing process; neglecting them can compromise its quality. Some of the entry criteria generally used to mark the beginning of testing are:

  • Complete or partially testable code is available.
  • Requirements are defined and approved.
  • Availability of sufficient and desired test data.
  • Test cases are developed and ready.
  • Test environment has been set-up and all other necessary resources such as tools and devices are available.
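The criteria listed above amount to a gate that testing must pass before it begins. A minimal sketch of such a gate, with the checklist values chosen purely for illustration:

```python
# Hypothetical entry-criteria checklist; in practice these flags would come
# from the project tracker, build system, and environment health checks.
entry_criteria = {
    "testable code available": True,
    "requirements approved": True,
    "test data available": True,
    "test cases ready": True,
    "test environment set up": False,
}

def ready_to_test(criteria):
    """Testing may begin only when every condition is met."""
    unmet = [name for name, met in criteria.items() if not met]
    return (len(unmet) == 0, unmet)

ok, unmet = ready_to_test(entry_criteria)
print(ok, unmet)  # False ['test environment set up']
```

Reporting the unmet conditions, rather than a bare yes/no, tells the team exactly what blocks the start of testing.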

Both the development and testing phases are used as sources to define the entry criteria for the software testing process:

  • The development phase/process provides useful information pertaining to the software – its design, functionality, structure, and other relevant features – which helps in deciding accurate entry criteria such as functional and technical requirements, system design, etc.
  • From the testing phase, the following inputs are considered:
    • Test plan.
    • Test strategy.
    • Test data and testing tools.
    • Test environment.


Entry criteria are mainly determined for four specific test levels, i.e., unit testing, integration testing, system testing, and acceptance testing. Each of these test levels requires distinct entry criteria to validate the objective of the test strategy and to ensure fulfilment of the product requirements.

Unit Testing:

  • Planning phase has been completed.
  • System design, technical design and other relevant documents are properly reviewed, analysed and approved.
  • Business and functional requirements are defined and approved.
  • Testable codes or units are available.
  • Availability of test environment.

Integration Testing:

  • Completion of the unit testing phase.
  • Priority bugs found during unit testing have been fixed and closed.
  • The integration plan and the test environment to carry out integration testing are ready.
  • Each module has gone through unit testing before the integration process.

System Testing:

  • Successful completion of integration testing process.
  • Priority bugs found during previous testing activities have been fixed and closed.
  • System testing environment is available.
  • Test cases are available to execute.

Acceptance Testing:

  • Successful completion of system testing phase.
  • Priority bugs found during previous testing activities have been fixed and closed.
  • Functional and business requirements have been met.
  • Acceptance testing environment is ready.
  • Test cases are available.

What is An Exit Criteria in Software Testing?

Exit criteria are an important document prepared by the QA team to adhere to the imposed deadlines and allocated budget. This document specifies the conditions and requirements that must be achieved or fulfilled before the end of the software testing process. With the assistance of exit criteria, the team of testers is able to conclude testing without compromising the quality and effectiveness of the software.

Exit criteria depend heavily on the by-products of the software testing phase, i.e., the test plan, test strategy, test cases, test logs, etc., and can be defined for each test level, from test planning and specification through execution. The commonly considered exit criteria for terminating or concluding testing are:

  • Deadlines have been met or the budget is depleted.
  • All test cases have been executed.
  • Desired and sufficient coverage of the requirements and functionalities under test has been achieved.
  • All identified defects have been corrected and closed.
  • No high-priority, high-severity, or critical bug has been left open.
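Mirroring the entry-criteria gate, the exit decision can be sketched as a check over test execution progress and the open bug list. The bug records and priority labels below are hypothetical:

```python
# Hypothetical defect records, as might be exported from a bug tracker.
open_bugs = [
    {"id": "BUG-7", "priority": "low", "status": "open"},
    {"id": "BUG-9", "priority": "high", "status": "closed"},
]
executed, total = 120, 120  # test cases executed vs. planned

def may_exit(open_bugs, executed, total):
    """Testing may conclude only when all cases have been executed and no
    high-priority or critical bug remains open."""
    blockers = [b for b in open_bugs
                if b["status"] == "open"
                and b["priority"] in ("high", "critical")]
    return executed == total and not blockers

print(may_exit(open_bugs, executed, total))  # True
```

Note the low-priority open bug does not block exit here; whether it should is exactly the kind of threshold the QA team fixes when it writes the exit criteria.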

Similar to entry criteria, exit criteria are also defined for all the different levels of testing. A few of them are:

Unit Testing:

  • Successful execution of the unit tests.
  • All the identified bugs have been fixed and closed.
  • Project code is complete.

Integration Testing:

  • Successful execution of the integration tests.
  • Satisfactory execution of stress, performance and load tests.
  • Priority bugs have been fixed and closed.

System testing

  • Successful execution of the system tests.
  • All specified business and functional requirements have been met.
  • Priority bugs have been fixed and closed.
  • The system’s compatibility with supported hardware and software has been verified.

Acceptance testing

  • Successful execution of the user acceptance tests.
  • Approval from management to stop UAT.
  • Business requirements have been fulfilled.
  • No critical defects have been left open.
  • Acceptance testing has been signed off.

Conclusion:

Defining entry and exit criteria for a software testing process is essential, as it helps the testing team finish the testing tasks within the stipulated deadlines without compromising the quality, functionality, effectiveness, or efficiency of the software.