How Has the Microservices Landscape Changed in the Last Year and a Bit?

2016 proved to be the year of the Cloud, DevOps, and Microservices. Organizations across the globe realized that Microservices was a great way to leverage the potential of the cloud, and that DevOps and Microservices worked better together to deliver business agility and increased efficiency. It became clear that large, monolithic application architectures had little place in the organization of the future. The cloud demands architectures that scale with changing workloads and flex to accommodate the evolving needs of the digital enterprise. 2016 showed that monolithic applications running on the cloud did not deliver the cloud's promised benefits, and that a Microservices architecture was best suited to leverage them.

  1. The Bump on the Road
    In one of our blogs published last year, we spoke of Microservices and the testing needs of applications built using the microservices architecture. One of the greatest challenges of microservices testing is testing each component both individually and as part of an interconnected system: each component or service is expected to be part of an interconnected structure and yet remain independent of it. However, as Microservices adoption increased, many organizations also realized that, despite the promise, latency issues persisted when accessing these applications. Microservices brokered by API management tools escalated the latency problem further, since the tools introduced an additional layer between the user and the microservice. Microservices also consumed large amounts of resources when deployed on virtual machines.
  2. Microservices and Containers – A Match Made In Heaven
    In 2016, the value of using Microservices with the Cloud became evident. 2017 promises to show the value of Microservices with Containers in breaking down the barriers that impede cloud usage. One of the key problems plaguing Microservices in 2017 is resource efficiency, and Containers can be used to solve it. Organizations are increasingly pairing Containers with Microservices: Containers improve the performance of these applications, aid portability, and decrease hardware overhead costs.

    Containers, unlike virtual machines, allow an application to be broken down into modular parts, so different development teams can work on different parts of the application simultaneously without impacting the others. This speeds up development, testing, application upgrades, and deployment. Since large software elements are duplicated far less, multiple microservices can easily run on a single server, and microservices deploy faster on Containers than on VMs. This helps when scaling applications or services horizontally under load, or when a microservice has to be redeployed.

    Along with increasing resource and deployment efficiency, Container adoption in Microservices has been growing owing to the level of application optimization Containers offer. Container clouds are also networked on a much larger scale and support the service discovery pattern for locating new services in the microservices architecture. While this level of optimization can be achieved with VMs, it is more complex, since VMs demand explicit management policies.
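To make the service discovery pattern concrete, here is a minimal, self-contained sketch in Python. It is illustrative only: the `ServiceRegistry` class and the "orders" service are hypothetical, and real container platforms typically provide discovery through DNS-based lookups or dedicated registry components.

```python
import time

class ServiceRegistry:
    """A minimal in-memory service registry (illustrative only).

    Real container platforms typically provide discovery via DNS or
    dedicated registry services; this sketch just shows the
    register/lookup pattern."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._instances = {}  # service name -> list of (address, registered_at)

    def register(self, name, address):
        # A new container instance announces itself on startup.
        self._instances.setdefault(name, []).append((address, time.time()))

    def lookup(self, name):
        # Return only instances whose registration is still fresh;
        # stale entries (e.g. crashed containers) are filtered out.
        now = time.time()
        live = [(addr, ts) for addr, ts in self._instances.get(name, [])
                if now - ts < self.ttl]
        self._instances[name] = live
        return [addr for addr, _ in live]

if __name__ == "__main__":
    registry = ServiceRegistry()
    # Two container instances of a hypothetical "orders" microservice register.
    registry.register("orders", "10.0.0.5:8080")
    registry.register("orders", "10.0.0.6:8080")
    print(registry.lookup("orders"))  # ['10.0.0.5:8080', '10.0.0.6:8080']
```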
  3. Rise of Microservices In DevOps
    The past year also saw increased use of Microservices in DevOps. Since Microservices offer scalability, modifiability, and ease of management owing to their independent structure, they fit comfortably with the DevOps concept. Microservices bring increased agility through shorter build, test, and deployment cycles, making them a perfect complement to a DevOps environment. With the growing adoption of Containers alongside Microservices, organizations are now able to use the DevOps environment better to deliver new services by streamlining the DevOps workflow. Fault isolation also becomes inherently easier with Microservices in DevOps: each service can be deployed independently, so identifying a problematic component becomes easier.
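One common way to spot a problematic component in such a setup is a per-service health endpoint that deployment tooling and monitors can poll. Below is a minimal sketch using only Python's standard library; the `/health` path and the port are illustrative assumptions, not a prescribed convention.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal health endpoint a monitor or orchestrator can poll."""

    def do_GET(self):
        if self.path == "/health":
            # In a real service this would check the service's own
            # dependencies (database, message broker, downstream services).
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each independently deployed microservice exposes its own /health,
    # so a failing component can be pinpointed without tracing the whole system.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```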
  4. Automation Focus Increases
    Organizations leveraging Microservices and DevOps are also increasing the level of automation in their testing initiatives. Owing to the DevOps methodology, test automation has found a firm footing in the microservices landscape, with testing in production, proactive monitoring, and alerts becoming part of the overall quality plan.

A year is a long time in the field of software development. When it comes to Microservices, we are seeing organizations leverage development methodologies like DevOps and technologies such as Containers in a symbiotic manner to propel growth, increase efficiencies, and improve business outcomes for all. How has your Microservices journey been?

Ensuring High Productivity Even With Distributed Engineering Teams

The traditional workspace has been witnessing an overhaul. From traditional cubicles to the open office to the standing desk, new trends to increase employee productivity arise every day. One such concept, a fundamentally different way of working when it arrived, has now cemented its place in the industry: distributed teams. The successful implementation of a distributed workforce by companies such as Mozilla, GitHub, MySQL, Buffer, WordPress, and more is a testament to the fact that geographical boundaries need not deter employee productivity and accountability. In fact, WordPress has over 200 employees distributed across the globe, all contributing successfully in their individual job roles.

Having a distributed workforce has definite advantages. It brings more diversity to the business, provides new perspectives on problem-solving, opens up a wider pool of trained resources, and reduces operational costs. Further, a study conducted by BCG and the WHU-Otto Beisheim School of Management showed that well-managed distributed teams can outperform teams that share an office space. However, ensuring the high productivity of a distributed engineering team demands ninja-like management precision.

In our years of experience working in a distributed setup with our clients, we have realized that one of the greatest benefits of such a workforce is the immense intellectual capital we have been able to harness; we now have some truly bright engineers working for us. Our clients' teams in the United States and our team in India successfully collaborate on software projects without a hitch. Let's take a look at how we make these distributed engineering teams work productively, focus on rapid application delivery, and produce high-quality software every time.

Have a Well Defined Ecosystem

First, it is imperative to have a well-defined ecosystem in which a distributed team can work and deliver high-quality applications cost-effectively. You need the right processes, knowledge experts, accelerators, continuous evaluation of the tools and technologies in use, strong testing practices, etc. Along with this, it is key to establish clear communication processes and optimal documentation. Leverage business communication tools and dashboards for predictability and transparency, and to avoid timeline overruns. Further, it is essential to bring all the important project stakeholders, such as the product owner, the team lead, and the architecture owner, together at the beginning of each project to outline the scope and technical strategy for a uniform vision.

Have Designated Project Managers In Each Location

Distributed teams demand a hybrid approach to project management. It helps, though it may not be essential, to have the stakeholders shouldering lead roles, such as the architects and the project managers, in the same location or time zone as the client. Along with this, it is also essential to have a lead who serves as the single point of contact and acts as the local team's spokesperson, to streamline communication, help the team stay on track, and avoid delivery delays.

Appropriate Work Allocation and Accountability
Appropriate work allocation is an essential ingredient that can make or break distributed engineering teams. Instead of assigning work based on location, it should be assigned based on team capacity, skills, and the release and sprint goals. Having cross-functional teams that can work independently with inputs from the product owner helps considerably in increasing team productivity, and allows work to be redistributed in the case of sprint backlogs. Giving each team member ownership of a feature can also increase accountability, measurability, and ultimately the productivity of the entire team.

Have a Common Engineering and Development Language
At the outset of the project, it is essential to establish the engineering and development language for project success. Clearly outlined development procedures, code styles, standards, and patterns contribute to building a strong product irrespective of the teams' locations, as code merges and integrations are likely to have far fewer defects. It is also important to align and standardize tools, to avoid spending time understanding or troubleshooting tool configurations. Securing team buy-in on the engineering methodology (are you going to use TDD, BDD, traditional agile, etc.?) helps eliminate subjectivity and ambiguity. Clearly outlined coding standards, technologies of choice, tools, and architectural designs likewise prevent misaligned values and engineering standards.

Such relevant information should also be published and maintained in the shared community (a virtual community across the distributed teams that serves as a single information source) using tools and dashboards that provide comprehensive information at a glance even for the uninitiated.

Leverage Time Zones Optimally

In order to ensure the same level of communication in a distributed team as in a co-located one, there has to be impeccable time zone management, with some overlapping work hours established. This makes it easier to involve key stakeholders in sprint planning, sprint reviews, daily stand-ups, retrospectives, etc. For a distributed team, it makes sense to break sprint planning into two parts: one that determines, at a high level, what each team is doing and develops a shared understanding of sprint backlogs and dependencies; and a second that clarifies details and breaks stories down into tasks. It is also important to have a remote proxy for the sprint review to establish what each local team has completed.

Testing is another important aspect that can impact the productivity of distributed engineering teams. Since most distributed teams leverage the 'Follow the Sun' principle, activities such as testing can be handed off to the other time zone, so that by the time the development team is back at work, the testing is already done. This can significantly improve the productivity of the engineering team.

Have An Integrated Code Base

When working to ensure the productivity of distributed engineering teams, it is imperative to have a single code repository so that everyone works against the same code base. Ensuring that all teams use the same CI server, so that builds and tests run against every iteration, prevents build breakages and the resulting productivity loss. Along with this, it is also essential to have a hot back-up server in each location to withstand adversities such as server downtime, power outages, etc.

Along with all this, there is another critical ingredient that makes distributed engineering teams more productive: trust. It is essential for distributed teams to trust one another and function as a single cohesive unit. Understanding cultural differences, respecting time zones, and maintaining clear communication between team members are a few things that can build trust, foster collaboration, and contribute to creating a highly productive distributed engineering team. That's our story. What's yours?


Software Testing for Microservices Architecture

Over the last few years, Microservices has silently but surely made its presence felt in the crowded software architecture market. The Microservices architecture deviates from the traditional monolithic approach, where the application is built as a single unit. While the monolithic architecture is quite sound, frustrations with it are building, especially as more and more applications are deployed in the Cloud. The Microservices architecture has a modular structure: instead of plugging together components, the software is componentized by breaking it down into services. Applications are thus built as a suite of services that are independently deployable and scalable, and that even offer the flexibility for different services to be written in different languages. Further, this approach helps enable parallel development across multiple teams.
Quite obviously, the testing strategy that applied to monoliths needs to change with the shift to microservices. Considering that applications built on the microservices architecture must deliver highly on functionality and performance, testing has to cover each layer of a service, and the interactions between layers, while remaining lightweight. However, because of the distributed nature of microservices development, testing can often be a big challenge. Some of the challenges faced are as follows:

  • An inclination of testing teams to use Web API testing tools built around SOA testing, which can prove to be a problem.
  • Since the services are developed by different teams, the timely availability of all services for testing can be a challenge.
  • Identifying the right amount of testing at each point in the test life cycle
  • Complicated log extraction during testing and data verification
  • Considering that development is agile and not integrated, availability of a dedicated test environment can be a challenge.

Mike Cohn’s Testing Pyramid can help greatly in drawing up the test strategy and identifying how much testing is required. According to this pyramid, taking a bottom-up approach to testing, and factoring in the automation effort required at each stage, can help address the challenges mentioned above.

  1. Unit Testing
    The scope of unit testing is internal to the service, and tests are written around groups of related cases. Since unit tests are the most numerous, they should ideally be automated. Unit testing in microservices has to amalgamate sociable unit testing, which checks the behavior of modules by observing changes in their state, and solitary unit testing, which looks at the interactions between an object and its dependencies. However, testers need to ensure that while unit tests constrain the 'behavior' of the unit under test, they do not constrain its 'implementation'. They can do so by constantly weighing the value of each unit test against its maintenance cost or the cost of the implementation constraint it imposes.
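As a minimal illustration of the two styles, here is a Python sketch; the `PriceCalculator` and `DiscountPolicy` classes are hypothetical stand-ins for modules inside a service.

```python
import unittest
from unittest.mock import Mock

class DiscountPolicy:
    """A collaborator of the unit under test (hypothetical)."""
    def discount_for(self, customer_type):
        return 0.10 if customer_type == "loyal" else 0.0

class PriceCalculator:
    """The unit under test (hypothetical)."""
    def __init__(self, discount_policy):
        self.discount_policy = discount_policy

    def total(self, amount, customer_type):
        return amount * (1 - self.discount_policy.discount_for(customer_type))

class SociableTest(unittest.TestCase):
    def test_total_with_real_collaborator(self):
        # Sociable: use the real DiscountPolicy and observe the resulting state.
        calc = PriceCalculator(DiscountPolicy())
        self.assertEqual(calc.total(100.0, "loyal"), 90.0)

class SolitaryTest(unittest.TestCase):
    def test_total_with_test_double(self):
        # Solitary: replace the collaborator with a test double and
        # verify the interaction, isolating PriceCalculator completely.
        policy = Mock()
        policy.discount_for.return_value = 0.10
        calc = PriceCalculator(policy)
        self.assertEqual(calc.total(100.0, "loyal"), 90.0)
        policy.discount_for.assert_called_once_with("loyal")

if __name__ == "__main__":
    unittest.main()
```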
  2. Integration Testing
    While testing modules in isolation is essential, it is equally important to test that each module interacts correctly with its collaborators, and to test them together as a subsystem to identify interface defects. This can be done with integration tests. The aim of an integration test is to check how modules interact with external components, by checking the success and error paths through the integration module. Conducting 'gateway integration tests' and 'persistence integration tests' provides fast feedback by identifying logic regressions and breakages between external components, which ultimately helps in assessing the correctness of the logic contained in each individual module.
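A minimal sketch of a gateway integration test in Python follows: it spins up an in-process stub of the external service and drives both the success and the error path through a small gateway module. The `InventoryGateway` and its endpoint are hypothetical.

```python
import json
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen
from urllib.error import HTTPError

class InventoryGateway:
    """The integration module under test (hypothetical)."""
    def __init__(self, base_url):
        self.base_url = base_url

    def stock_level(self, sku):
        try:
            with urlopen(f"{self.base_url}/stock/{sku}") as resp:
                return json.loads(resp.read())["level"]
        except HTTPError:
            return None  # error path: treat upstream failure as "unknown"

class StubInventoryService(BaseHTTPRequestHandler):
    """In-process stand-in for the real external service."""
    def do_GET(self):
        if self.path == "/stock/abc":
            body = json.dumps({"level": 7}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(500)
            self.end_headers()

    def log_message(self, *args):  # keep test output clean
        pass

class GatewayIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Bind to port 0 so the OS picks a free port for the stub.
        self.server = HTTPServer(("127.0.0.1", 0), StubInventoryService)
        threading.Thread(target=self.server.serve_forever, daemon=True).start()
        self.gateway = InventoryGateway(f"http://127.0.0.1:{self.server.server_port}")

    def tearDown(self):
        self.server.shutdown()

    def test_success_path(self):
        self.assertEqual(self.gateway.stock_level("abc"), 7)

    def test_error_path(self):
        self.assertIsNone(self.gateway.stock_level("missing"))

if __name__ == "__main__":
    unittest.main()
```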
  3. Component Testing
    Component testing in microservices demands that each component be tested in isolation, replacing external collaborators with test doubles and exercising the component through its internal API endpoints. This gives the tester a controlled testing environment, helps them drive the tests from the customer's perspective, allows comprehensive testing, improves test execution times, and reduces build complexity by minimizing moving parts. Component tests also verify that the microservice has the correct network configuration and is capable of handling network requests.
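The sketch below illustrates the idea in Python: a whole (hypothetical) orders component is exercised through its request handler while its external payment collaborator is replaced with a test double, so the test needs no real network dependencies.

```python
import unittest
from unittest.mock import Mock

class OrdersService:
    """A whole component (hypothetical), exercised via its API surface."""
    def __init__(self, payment_client):
        self.payment_client = payment_client  # external collaborator

    def handle_request(self, method, path, body=None):
        # A simplified stand-in for the service's HTTP routing layer.
        if method == "POST" and path == "/orders":
            if self.payment_client.charge(body["amount"]):
                return 201, {"status": "created"}
            return 402, {"status": "payment_failed"}
        return 404, {}

class OrdersComponentTest(unittest.TestCase):
    def test_order_created_when_payment_succeeds(self):
        payments = Mock()
        payments.charge.return_value = True  # double for the external service
        service = OrdersService(payments)
        status, body = service.handle_request("POST", "/orders", {"amount": 42})
        self.assertEqual((status, body["status"]), (201, "created"))

    def test_payment_failure_is_reported(self):
        payments = Mock()
        payments.charge.return_value = False
        service = OrdersService(payments)
        status, _ = service.handle_request("POST", "/orders", {"amount": 42})
        self.assertEqual(status, 402)

if __name__ == "__main__":
    unittest.main()
```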
  4. Contract Testing
    The above three levels of tests provide high coverage of the modules. However, they do not check whether the external dependencies support the end-to-end business flow. Contract testing tests the boundaries of the external services, checking the inputs and outputs of service calls and verifying that each service meets its contract expectations. Aggregating the results of all the consumer contract tests lets the maintainers of a service make changes, where required, without impacting consumers, and also helps considerably when new services are being defined.
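Below is a minimal Python sketch of the consumer-driven idea: the contract here is just a dictionary describing what a consumer expects from a hypothetical users service. Dedicated tools such as Pact formalize and share contracts like this between teams.

```python
import unittest

# The consumer publishes its expectation of the provider's response shape.
USERS_CONTRACT = {
    "endpoint": "GET /users/{id}",
    "response_fields": {"id": int, "name": str, "email": str},
}

def provider_response(user_id):
    """Stand-in for the provider's actual response (hypothetical)."""
    return {"id": user_id, "name": "Ada", "email": "ada@example.com"}

class UsersContractTest(unittest.TestCase):
    def test_provider_honours_consumer_contract(self):
        response = provider_response(1)
        for field, expected_type in USERS_CONTRACT["response_fields"].items():
            # The provider may add fields freely, but every field the
            # consumer relies on must be present with the expected type.
            self.assertIn(field, response)
            self.assertIsInstance(response[field], expected_type)

if __name__ == "__main__":
    unittest.main()
```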
  5. End-to-End Testing
    Along with testing the services, testers also need to ensure that the application meets its business goals irrespective of the architecture used to build it, and test how the completely integrated system operates. End-to-end testing thus forms an important part of the testing strategy for microservices. Moreover, considering that there are several moving parts behind the same behavior in a microservices architecture, end-to-end tests identify coverage gaps and ensure that business functions do not break during architectural refactoring.

Conclusion
Testing in microservices has to be more granular and yet, at the same time, avoid becoming brittle and time-consuming. For a strong test strategy, testers need to define the services properly, with well-defined boundaries. Given that the software industry is leaning heavily towards microservices, testers of these applications may need to change their processes and implement tests directly at the service level. By doing so, not only will they be able to test each component properly, they will also have more time to focus on end-to-end testing once the application is integrated, and so deliver a superior product.
