Testing Strategies For The eCommerce Shopping Season

eCommerce is now a high-octane space, with almost all retailers vying to build a winning online presence. As the holiday season kicks into gear with Black Friday and Cyber Monday and continues on to Christmas, online retailers have to ensure that, just like their brick-and-mortar stores, their online stores are ready to service the heavy footfall in the weeks ahead. Large retailers such as Walmart and Target have worked on their online stores to avert a repeat of 2015, when their eCommerce stores buckled under the pressure of heavy traffic on Black Friday and Cyber Monday. eCommerce companies, both big and small, are working to improve the digital experience their eStores provide by improving reliability and increasing the capacity to handle and manage high traffic. An Adobe Digital Insights report shows that approximately “35% of Americans are ready to shop right at the dinner table to ensure a good deal”. A Synchrony Financial survey shows “that more than half of holiday shoppers say the best deals are online, and 37% report they plan to do more of that this year, given the pretty much, anytime, anywhere convenience.” It thus becomes essential that online retailers provide a seamless, hassle-free shopping experience to capture greater profits during the holiday season.

While having a great digital strategy forms an essential part of increasing sales in the holiday season, one way eTailers can ensure that their online store performs optimally is by testing their website. In this blog, we take a look at a few testing strategies for the eCommerce shopping season.

  • Catalog and segment infrastructure – ease of use:
    Given that the number of items and related discounts increases considerably during the shopping season, eTailers must ensure that all these items are displayed correctly on the product pages. It is imperative to test that all products and their associated discounts are reflected correctly, that products have been cataloged correctly, and that product browsing is easy. Testers also have to ensure that all search options work and display correctly, that the right number of products appears on each page, that products are not duplicated on the next page, and that pagination and filtering options work in harmony so that the user can browse the website with ease.
  • Load and Performance testing:
    Research from the load balancing and cyber security solutions company Radware shows that slow eCommerce websites contribute to 18% of shopping cart abandonments. Testers therefore need to make sure that the website loads fast, especially when traffic to the website is high. Testers should look at historical data and then estimate the spike in traffic that can be expected on the website. Along with the traffic, testers need to test web application components such as the hardware, the database, and the network bandwidth to assess whether they can handle the anticipated load, and adjust the application’s performance profile accordingly. Additionally, testers need to assess how many concurrent requests the system can handle at maximum load, whether response times for all test paths are acceptable, and the reasons for poor website performance, such as large data sets or browser incompatibility. Extensive load testing will also determine whether the website needs more load balancers to eliminate the refused connections that ultimately lead to disgruntled customers. A minimal concurrency sketch appears after this list.
  • Mobile testing:
    Pymnts.com estimated that mobile sales on Cyber Monday grew by almost 53% from 2014 to 2015 and accounted for USD 514 million in revenue. According to a report by Dynatrace, over 50% of millennials (the largest and fastest-growing demographic in the US) who use smartphones do more holiday shopping from their mobile devices than in-store. Testers therefore must not ignore performance testing for mobile, and have to ensure that the mobile application or mobile website does not crash under peak pressure. Along with overall mobile performance testing, testers also have to assess and resolve problems such as mobile latency, and conduct mobile network speed simulations to ensure optimal performance.
  • Shopping cart and payments:
    Testers have to make sure that all products in the shopping cart display correctly when the user proceeds to checkout. Given that people are pressed for time, they need to ensure that the checkout process is smooth and that all discount codes are applied correctly. Regression testing with all active and inactive codes thus becomes important. Testers also have to make sure that the discount codes do not put an undue load on the database, as this too can impact the performance of the website. Finally, they need to check that all the payment systems in use function correctly even during peak traffic.
  • Security:
    eTailers also must account for the security of their customers. Data from ACI Worldwide reveals that while eCommerce transactions over the 2015 Thanksgiving period grew by 21%, fraud attempts grew by just 8%. Testers should nonetheless make sure that the security layer is not compromised by ensuring secure handling of incoming and outgoing data, running more penetration tests to identify vulnerabilities, and taking a multi-layered approach to security.
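To make the concurrency point in the load and performance item above concrete, here is a minimal sketch in TypeScript, assuming Node 18+ with the built-in fetch API; the endpoint URL and the concurrency figure are placeholders, and a production-grade test would normally use a dedicated tool such as JMeter, Gatling, or k6.

```typescript
// Minimal concurrent-load sketch (assumes Node 18+ for the built-in fetch).
// TARGET_URL and CONCURRENT_USERS are illustrative placeholders.
const TARGET_URL = "https://example-store.test/checkout";
const CONCURRENT_USERS = 200;

async function timedRequest(url: string): Promise<number> {
  const start = Date.now();
  await fetch(url);            // fire the request; errors bubble up to the caller
  return Date.now() - start;   // elapsed time in milliseconds
}

async function runLoadTest(): Promise<void> {
  // Launch all requests at once and wait for every response.
  const requests = Array.from({ length: CONCURRENT_USERS }, () => timedRequest(TARGET_URL));
  const timings = await Promise.all(requests);

  timings.sort((a, b) => a - b);
  const p95 = timings[Math.floor(timings.length * 0.95)];
  console.log(`slowest: ${timings[timings.length - 1]} ms, p95: ${p95} ms`);
}

runLoadTest().catch(console.error);
```

Running a sketch like this against a staging environment at steadily increasing concurrency levels gives a rough picture of where response times begin to degrade.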

Conclusion:
With eCommerce sales estimated to cross $414.0 billion by 2018, making sure that your eCommerce website performs to customers’ expectations, especially during the holiday shopping season, becomes imperative. By taking a methodical, planned approach to eCommerce testing, eTailers can make sure they unwrap the holiday season with profits.

Test Design – The Crucial Step to Test Automation

Recently, our test automation experts were talking with an organization that was restarting its test automation project. The discussion started with the company narrating how its earlier test automation project had failed miserably after 13 months, costing it a great deal of money, a lot of time and, worst of all, the team’s faith in test automation. The company had started its initiative by purchasing expensive test automation technology. It then wrote automation scripts for most of the manual test cases and started running them. The end result? A huge number of automated test scripts that require high maintenance, need human intervention to run, and are of no real use to the product.

The problem we see in most scenarios is that discussions around test automation start with what to automate and what not to automate. Ideally, what needs to be defined first is what to test and what not to test. That is what test design is. Test design is a crucial phase of testing: it involves analyzing the product specifications and coming up with test cases to validate the product functionality. This is 100% human effort and cannot be automated. It requires domain experts, software development experts, and testing experts to work together to prepare a test plan with great attention to detail. Test design is what makes or breaks the success of test automation.

“More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.”

-Boris Beizer, Software Testing Techniques

An effective test design involves defining the test cases that will test the software. The test cases should be created in such a manner that they can be easily read, written, and maintained. The objective of the test cases should be not only to find bugs in the product but also to improve the overall product experience for the user. Test case maintenance is a crucial aspect that is often overlooked. Test design needs to consider this aspect and make sure that test cases are designed in such a way that they are easily maintainable, even by people who did not create them in the first place. Especially in test automation projects, the test design should aim to reduce the maintenance cost of test development. The test design also needs to align with business goals such as faster time to market, greater test coverage, and increased team confidence.

Like a software development project, a test automation project needs to go through design, development, architecture, and maintenance. Test automation needs a product development mindset, and the test automation suite needs to follow the product roadmap.

Typically, a good test design involves –

  • Detailed thinking about what to test and how to test it, the design of the tests, and the execution plan.
  • Test authoring using methods such as model-based testing, boundary value analysis, action-based testing, and error guessing, and defining the keywords and action words (a boundary value sketch follows this list).
  • Test case design for bug identification, impact, and maintainability.
  • Having a standard set of guidelines for writing the test cases.
  • Grouping of test cases into small modules and suites.
  • Test case writing based on test objectives, steps, test data, and validation criteria.
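As an illustration of one of the authoring methods named above, here is a minimal boundary value analysis sketch in TypeScript, assuming a Jest-style test runner; the quantity-validation function and its 1–100 limits are hypothetical.

```typescript
// Hypothetical function under test: an order quantity must be between 1 and 100 inclusive.
function isValidQuantity(quantity: number): boolean {
  return Number.isInteger(quantity) && quantity >= 1 && quantity <= 100;
}

// Boundary value analysis: exercise values at and on either side of each boundary.
describe("isValidQuantity boundaries", () => {
  test("values at and around the 1–100 boundaries", () => {
    expect(isValidQuantity(0)).toBe(false);   // just below the lower boundary
    expect(isValidQuantity(1)).toBe(true);    // on the lower boundary
    expect(isValidQuantity(2)).toBe(true);    // just above the lower boundary
    expect(isValidQuantity(99)).toBe(true);   // just below the upper boundary
    expect(isValidQuantity(100)).toBe(true);  // on the upper boundary
    expect(isValidQuantity(101)).toBe(false); // just above the upper boundary
  });
});
```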

Conclusion

In a nutshell, a good test design is a crucial step in test automation to achieve meaningful test coverage, find defects in the software, and build confidence in the testing team. It achieves more accuracy, is effective, and involves low maintenance. Contrary to common belief, a good test design is not very hard to do. It just requires dedicated thinking, patience, domain knowledge, understanding of good testing practices, and knowledge of design guidelines. Go for it and you will see the returns very soon!

Making Your Global QA Team A Super-Success

“More and more, in any company, managers are dealing with different cultures. Companies are going global, but the teams are being divided and scattered all over the planet.” – Carlos Ghosn

While Ghosn was mainly talking about automobiles, the software development business is one that has really taken to this distributed, global way of working. Software maker Atlassian surveyed 1,300 companies engaged in software development, and the results suggest as much: 72% of the companies confirmed that their software teams were spread across multiple locations, and as many as 17% of the surveyed companies said that more than half of their entire team was remotely located. Given the rate at which companies are setting up offshore development centers and outsourcing to locations like India, it is fair to assume that many of those locations are global ones. So, when software development, and by extension software testing teams, are global, what can you do to ensure they function effectively?

Here are some things we pay attention to – chances are you will find them useful too. By the way, those looking for a list of the technical skills that go into the making of a global QA team are likely to be surprised by this post – our experience is that while tech skills are mandatory, other things make the crucial difference.

  • Clear direction:
    If everyone knows exactly where the destination lies, it is more likely they will tread the same path to get there. Identifying the strategic direction and business objectives, and articulating them clearly to the entire team, will help ensure that everyone pulls in the same direction. In approaches like Agile or continuous development, where the product under development goes through many iterations in short spans of time, it is even more critical that everyone has a clear understanding of the strategic objectives driving all the smaller changes.
  • Unified team:
    The teams and individual members at the global locations have to feel that they are part of a unified effort and not bit contributors at some outpost. This means that a special effort has to be made to make them feel that way. One good way to do that is to apply similar policies across the locations for things like measuring effectiveness and productivity standards, rewards and recognition, training opportunities and, yes, even seemingly trivial things like the availability of “swag” from the logo store – this really works. One note of caution, though: there are cultural issues specific to each global location, and it is important to make allowance for those while defining these policies.
  • Communication:
    As must already be apparent from the earlier points, communication between the engineering and business leadership and the remote teams is critical. Communication between the development teams and testing teams across locations, and even between the parts of the testing team that may themselves be spread across global locations, has to be clear, frequent, structured, and goal-oriented. If the efforts of everyone across the locations have to be aligned, it is imperative to keep everyone updated on progress, possible obstacles, changes in strategy or operational tactics and, especially, follow-throughs on actions involving contributions from other locations. Everyone should be clued into what everyone else is doing – just as would be the case if they were all located in the same place.
  • Process:
    The engineering leadership needs transparent visibility into what’s happening at all times across teams and the teams themselves need help to be able to steer a common course, irrespective of location. Defining (and of course, following) a clear process for all the testing teams to follow and including regular milestones and touchpoints to objectively measure progress along the way is the most effective way to help address both those needs.
  • Give Ownership with Accountability:
    Talking about achieving milestones, something we have always found essential is to inculcate a sense of ownership in the remote, global team. That means they have to own the milestones, along with the accountability for achieving them. This requires an effort to make the remote testing teams relatively self-sufficient or autonomous in day-to-day decision-making, and to make them responsible for the results they are expected to deliver.
  • Leverage technology:
    Technology can be an extremely valuable tool in achieving each of the points mentioned above. Great tools exist for everything – version control, test-case management, bug tracking, collaboration, and even communication. Many of these tools can be integrated with each other for greater effectiveness. In the Atlassian study quoted earlier, 82% of those surveyed had integrated their source code management with a build system, an issue tracker, or both. Whatever technology is chosen, it is important to bring teams across locations onto the same platforms in the interest of consistency.
    Working with global teams is a foregone conclusion today – chances are that if you aren’t already working with them, you soon will be. While the factors highlighted here could help those starting out on this road, let’s ask those who have been doing this a while what their secrets are. How did you make it work?

A Great Time to be ThinkSys – A Personal Note

Followers of my occasional posts on the ThinkSys blog may find this post a bit out of character. I have usually spoken of issues that have equal parts technical and business impact, with a slight tilt towards software testing. This one, though, is more of a personal note. The last few weeks have been great for the entire ThinkSys team, and I wanted to take this opportunity to say just why.

I’m just back from a very busy few days in the company of the world’s leading software testing minds at TechWell’s StarWest. Like last year, ThinkSys was a platinum sponsor and, like last year, it was well worth the effort. We met a whole bunch of old friends and made a load of new connections. Clearly, the ThinkSys story of a focus on excellence, cost-effectiveness, and efficiency in the QA, test automation, and web development services we offer has resonance.

It is this focus that has really made the last few weeks extra-special though. Over this time, we were selected to receive 3 major awards that really validate everything we are doing, as far as I am concerned.

First, we were selected for inclusion in the Inc. 5000 – an annual compilation by the prestigious business publication of the fastest-growing private companies in the US. Eric Schurenberg, President and Editor-in-Chief of Inc., has been quoted as saying, “No one makes the Inc. 5000 without building something great — usually from scratch.” When you consider that companies like Microsoft, Oracle, and Zappos.com have been included in this list in the past, you can understand the reason for our pride.

Apart from our presence in the US, our primary development center is in India. The region is important to us – as a market as well as a talent pool to hire from. In the first of those contexts, our selection for the Red Herring Asia 100 award is very significant for us. This is a list of the fastest-growing technology companies in Asia. It is a great honor to be included in this list because of the onerous selection criteria. Red Herring is known to evaluate companies across more than 20 separate factors, and the companies go through a 3-step review process designed to eliminate anyone that doesn’t fully fit the bill. Red Herring says those that do make it could be in for “explosive growth”. That’s great for ThinkSys, I guess!

Our development center is located in the National Capital Region, just outside Delhi, India. In an accolade that perhaps represents our contribution to the wider technology services sector in the area, the global analysis and analytical research company Worldwide Achievers named us the “Best Emerging Software Development Company in Delhi/NCR”. The award is decided after comprehensive market surveys and a great deal of research. We received this award from a representative of the highest level of the Indian government at the Worldwide Achievers Business Leader’s Summit & Awards 2016. My view is that this award is a sign of our growing influence as an employer as well as a thought leader in the technology areas we operate in – at least within the region we operate in.

All in all, this has been a great couple of months for ThinkSys. That said, and as I look at the first of our trophies sitting in our offices with pride, I take inspiration from knowing that the best has still to come. So with a promise to look at the future, let me end by thanking every one of our employees, associates, customers and everyone else who thought well of us – you have an equal share in our success!

TDD – Myths and Misconceptions

Test Driven Development (TDD), also referred to as test-driven design, is a widely accepted methodology in the field of software development. It typically relies on repetitive unit tests which are executed against the source code being developed. In a way, this is starting with the end in mind – designing the test cases that have to be “passed” for the code under development to be found “acceptable.” The idea of TDD is to expedite the development and testing process, even though the tests may not be perfect in the first iteration. With each cycle of new code development, the tests are refactored and run again, and the iterative process continues until the desired functionality works as expected. Despite being an integral part of the approach adopted by many software development teams, there are certain myths or misconceptions associated with implementing TDD. Let’s take this chance to look at some of them and, hopefully, put them to rest.
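As a rough illustration of the cycle described above, here is a Jest-style sketch in TypeScript; the cart-total function and its behaviour are hypothetical. In TDD the test below is written first and fails, the simplest implementation then makes it pass, and the code is refactored while the same test keeps running.

```typescript
// Step 1 (red): the test is written before the implementation exists, so it fails at first.
describe("cartTotal", () => {
  test("sums item prices and applies a percentage discount", () => {
    const items = [{ price: 40 }, { price: 60 }];
    expect(cartTotal(items, 10)).toBe(90); // 100 minus a 10% discount
  });
});

// Step 2 (green): the simplest implementation that makes the test pass.
// Step 3 (refactor): clean up the code, re-running the same test after every change.
interface Item {
  price: number;
}

function cartTotal(items: Item[], discountPercent: number): number {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return subtotal * (1 - discountPercent / 100);
}
```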

Employing TDD is a time-consuming process

TDD actually comes into its own when you measure its long-term benefits. The focus of every organization’s management team is to deliver the end product within the agreed timelines. Adding TDD to the picture could increase the initial time estimates, and it can thus be viewed as a time-consuming process. However, implementing TDD has long-term advantages, as it helps increase the overall productivity of developers. TDD reduces the number of defects that are detected only once the application is deployed in production. Post-production, much more time is spent investigating, isolating, and fixing an issue that could have been avoided by using TDD. Production issues also tarnish the image and reputation of the organization and eventually lead to unsatisfied customers.

R. Jeffries and G. Melnik, writing for IEEE, have reported that TDD helps decrease production bugs by 40-80%. Of course, it can add 10-30% to the initial development time and cost, but considering factors like defect fixes and maintenance, it is justified to implement TDD from the first phase of project development. In fact, the benefits of TDD have been tested on real projects by several companies, and they found the process to be enormously beneficial.

Writing test cases before the code, is not feasible

With the introduction of the Agile approach, the process of software design has moved from the waterfall model to a more iterative model. It is sometimes a challenge to convince developers to write test cases as the design is being built, but doing so helps developers consider multiple scenarios that may lead to failure situations in real time. TDD paves the way for developers to think in a manner that eventually leads to better design. It adds efficiency to code creation because there is immediate feedback from the unit tests. Developing code along with unit tests also allows parallel development and reduces debugging and testing time.

TDD is a software design method for better design

Technically, TDD is not really a software design methodology, but it does lead to better software design! Developing your code the TDD way automatically leads to better-designed code, because it is very hard to write meaningful test cases for poorly structured, low-quality code. But the application design still needs to be catered to by having a clear idea of things like data structures, design patterns, overall system architecture, feasibility, and scalability considerations.

TDD always ensures complete code coverage

The goal of many testing organizations implementing TDD is 100% code coverage. Code coverage is only useful when the tests themselves are well written; writing incomplete or bad tests will not help. Ideally, a line or snippet should count as covered only if the tests covering it are passing. Code coverage cannot guarantee sufficiency: TDD code is likely to have near-perfect code coverage, but perfectly covered code is not necessarily sufficiently tested.

James Grenning, who has written a great deal about TDD, said, “The best TDD can do, is assure that code does what the programmer thinks it should do. That is pretty good BTW.” TDD is a proven concept and is embraced by many developers and organizations. If this is the approach that works for you, then go ahead and join the legion of followers – and if the objections you encounter are from among those listed above, you now know just how little truth they really hold.

Is Apple All Set to Own Enterprise Mobility With iOS 10?

At the recently concluded Apple WWDC, the iPhone 7 was the object of everyone’s attention. During the conference, Apple also announced its first double-digit iOS release, iOS 10. Even though not much time was spent discussing its fantastic capabilities, it seems clear that this major iOS update indicates that Apple is warming up to the enterprise. The business focus that iOS 10 demonstrates is hard to ignore, with expanded EMM (Enterprise Mobility Management) functionalities and greater device interoperability. Apple had made its enterprise intentions apparent last year by entering into a partnership with Cisco. With this partnership, enterprises can take advantage of joint solutions that deliver an “experience that wouldn’t have been possible for either company to deliver alone”, according to Rowan Trollope, senior vice president and general manager at Cisco. This year, with the iOS 10 release, we can see how iOS devices will gain IT capabilities and transform businesses through mobility. So what makes iOS 10 enterprise ready?

A mature and complete Enterprise Platform to develop fully vetted and integrated enterprise apps

With iOS 10, Apple has introduced a complete enterprise platform. This platform leverages Apple’s partner network to provide clients with an ecosystem for enabling business processes with enterprise mobility. Customers can now focus on meeting business objectives and spend less time worrying about backend management. The partnership with Cisco aims to ensure that mobile management is smarter, faster, and easier for IT administrators, developers, and end users. Apple is also offering its enterprise customers a variety of integrated apps that meet their business requirements, including apps from leading vendors such as MobileIron, Cisco, and IBM, which are part of its Mobility Partner Program (MPP). With the help of the AppConfig Community, a collection of EMM (Enterprise Mobility Management) vendors and app developers, Apple enables developers to develop, configure, and secure mobile apps for the enterprise by employing the extensive app security and configuration frameworks available in the OS.

New developer features
iOS 10 adds new features to the developer toolkit, making it more enterprise-ready. The CallKit API, demonstrated with Cisco Spark, allows VoIP developers to build apps whose calls look and behave like native iPhone calls. With this update, Apple also requires that all apps submitted to the App Store from 2017 support ATS (App Transport Security) to prevent vulnerabilities and ensure secure, encrypted web connections. iOS 10 also introduces an App Store for iMessage that allows developers to create apps with broader capabilities, such as rich animations, for use in iMessage.

Apps Open to Third-Party Developers
By opening up Siri to developers in iOS 10, Apple is looking to give users the benefit of an even better experience. SiriKit gives developers access to a variety of features as well as the full intelligence already built into Siri. Apple is opening up Siri as well as Maps to third-party developers so that they can bring new features to these apps. Apple is also opening the Messages app to developers, making it compatible with more enterprise apps. SiriKit provides an app extension that communicates with Siri even when the app isn’t running.

Enhanced interoperability
With the iOS 9.3 update, Apple offered new ways to manage multiple devices. With iOS 10, Apple enables these devices to work together harmoniously within the enterprise. Downloading new apps and sharing data across all Apple devices becomes much easier, and this allows better continuity for users. Features such as Universal Clipboard allow users to move back and forth between Mac and iOS devices seamlessly without using solutions such as AirDrop. To prevent unintended data loss, Apple partners such as MobileIron are employing stricter copy/paste controls. The Auto Unlock feature allows users to unlock a Mac without typing a password, by using an Apple Watch. Once the device is unlocked, the Watch needs to stay in contact with the user’s skin to keep it unlocked. If it loses skin contact, the device locks automatically and will then require the owner’s PIN. The user also has to be within three meters of the Mac to unlock it.

With iOS 10, Apple also takes care of the security of data in motion by supporting ‘VPN IKEv2 EAP-only’ mode. This makes VPN access to corporate data, a critical component of enterprise mobility, secure.

Unified platform
With iOS 10, Apple seems to be on the road to what Gartner calls “Unified Endpoint Management”. Enterprise customers can now make updates to any device running macOS Sierra using MDM. IT admins will also be able to implement policy restrictions for iCloud Keychain sync, iCloud Photo Library, Apple Music, Notes sharing, Find My Mac, and so on, and new payloads allow the IP firewall on Sierra to be configured. So we can expect EMM platforms to secure and manage most Apple devices, both desktop and mobile, leading the shift toward desktop and mobile convergence.

Reduced workloads
With iOS 10, enterprises can expect to see management workloads reduce by almost half, since managing iPhones and other Apple devices can be done on the corporate network itself. This also helps reduce network latency, employee downtime, and the increased helpdesk calls that stem from sluggish app performance due to network shortcomings.

Enhanced Security
Enterprises call for greater security, and this is also an area of focus. Wi-Fi can now be controlled (enabled/disabled) through geo-fencing within MobiControl. Apple is also allowing supervised control over app installation and removal, and control over system apps like FaceTime, Siri, iTunes, iCloud, and Game Center, which will further enhance the enterprise user experience.

That apart, there have also been significant security and performance improvements in areas like faster roaming, fewer web browsing failures, and more reliable calls. It does appear that with iOS 10 we are witnessing the evolution of iOS into a mature, enterprise-grade platform that offers more variety, ease of use, and secure, work-ready apps. With its new features and focus on integrating the new iOS into the partner ecosystem, it seems that Apple is looking to establish a beachhead within the enterprise. The world is watching with interest.

Some Numbers That Tell the Evolving Software Testing and Automation Story

In today’s rapidly changing world of testing practices, the focus of the software industry is moving away from the Center of Excellence approach to a more wide-ranging strategy using Agile or DevOps practices. So much so that, based on the World Quality Report 2016, the budget allocation for testing has increased 9% year-on-year. Considering this shift, it is worth looking at some of the top trends and likely evolutions for testing and test automation, and some key numbers that tell the tale:

  • Development and Operations (DevOps)
    DevOps is a software development approach which relies on tight coordination between development, testing, and operations. The goal of DevOps is to improve communication and collaboration between these functions so as to achieve, almost absurdly, faster software delivery. Of all the participants in the World Quality Report survey, 67% reported using DevOps practices. This has a fundamental impact on testing, because the basic idea of DevOps is to start testing as early as possible in the development cycle. This ensures that issues which would otherwise surface only once the entire development is completed are identified much earlier. DevOps also helps teams focus on quality from day one of a project, instead of finding out about defects much later in the software development lifecycle.
  • Test Driven Development (TDD)
    TDD is a methodology that, in many ways, relies on the repetition of a short development cycle: customer requirements are converted into very specific test cases, and the code is then written and reworked until those tests pass. 39% of the participants in the World Quality Report are using TDD, and with good reason. A paper published in the Empirical Software Engineering journal mentions that ‘TDD seems to be applicable in various domains and can significantly reduce the defect density of developed software without significant productivity reduction of the development team.’ The study also compared four projects at Microsoft and IBM that used TDD with similar projects that did not, and found that defects per thousand lines of code were reduced by anywhere from 40% to 90% compared with non-TDD projects.
  • Selenium and related tools
    Selenium is a software testing framework designed primarily for web applications. One advantage of Selenium is that it is an open source tool, which means there is no license cost involved. More and more companies are now embracing this open source technology for their automation testing needs. Gartner’s Magic Quadrant for Software Test Automation makes the rather bold claim that by the year 2020, Selenium WebDriver will become THE (emphasis ours) standard for functional test execution, and that support for it will be a key differentiator between vendors. A minimal WebDriver sketch appears after this list.
  • Mobile testing in the cloud
    Mobile usage is rising, as everyone knows, but the demands are also becoming more complex. HTTP Archive reported that in the four years after 2011, the average size of a page served to mobile devices rose from 340 KB to 1,080 KB; 20% of these pages had more than 100 resource requests, and nearly two in three had 10 or more JavaScript requests. This suggests a growing emphasis on mobile testing. Mobile testing in the cloud can help testing teams be more productive and efficient. Usually, mobile testing lab devices are set up at a centralized location within the organization. With a variety of mobile cloud testing tools, testing teams spread across different locations can all access any of the devices in the mobile testing lab via the cloud. Cloud testing also offers the flexibility of not having to buy the same device more than once; instead, you can have one device in the cloud, and testers can reserve times for when they wish to test on it.
  • Digital transformation
    The 2016 CIO Report found that two out of three CIOs were already measuring their efforts against different KPIs than 12 months earlier, directly because of concerns related to digital transformation. Digital transformation is today becoming a key way for enterprises to achieve their business goals and to avoid being disrupted by newer, more digitally adept business models. Many companies are now relying on this transformation for their success. With digital transformation, software quality is a critical factor that organizations need to consider. The digital agenda is focused on user experience testing, as organizations feel the need to provide the best and most innovative digital experience to their end users. Security testing is also a vital consideration when it comes to digital transformation. The centrality of testing in the new, digitally transformed enterprise is visible in the World Quality Report’s projection of a 40% rise in budgets for QA and testing by 2018.
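The WebDriver sketch referenced in the Selenium item above, written in TypeScript against the selenium-webdriver package; the storefront URL and the element locators are illustrative, and the snippet assumes a locally installed ChromeDriver.

```typescript
import { Builder, By, until, WebDriver } from "selenium-webdriver";

async function searchSmokeTest(): Promise<void> {
  // Start a Chrome session; assumes chromedriver is installed and on the PATH.
  const driver: WebDriver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example-store.test"); // hypothetical storefront
    // The locators below are placeholders that depend on the page under test.
    await driver.findElement(By.name("q")).sendKeys("winter jacket");
    await driver.findElement(By.css("button[type='submit']")).click();
    await driver.wait(until.titleContains("winter jacket"), 5000);
    console.log("Search results page loaded:", await driver.getTitle());
  } finally {
    await driver.quit(); // always close the browser, even if an assertion fails
  }
}

searchSmokeTest().catch(console.error);
```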

Conclusion
Overall, the top priorities for QA and automation testing executives continue to include security, customer experience, and efficient delivery. The main intent of QA and automation testing is evolving toward providing a seamless experience to end users, which in turn helps improve the corporate image, end-user satisfaction, and ultimately business results.

Here’s What We Look for While Hiring Great Software Testing Pros

By: Rajiv Jain (CEO, ThinkSys)

“Great Vision without great people is irrelevant” – Jim Collins

Today the consumers of technology are everywhere…from the biggest enterprises to the youngster with a mobile phone. As we become increasingly conversant with technology, it becomes essential to develop good, robust software products that are bug-free, error-free, and fast. Today, no organization or individual has the time, patience, or bandwidth to deal with slow, buggy apps that can pose a productivity or security threat. To enable organizations to deliver on the demands of speed and performance, one thing climbing up the priority chart is testing. With the proliferation of test automation, many feel that the role of a tester is just that of a facilitator. This could not be further from the truth. Testers are the secret weapon that ensures the products we develop perform to their optimal capacity.

Everyone looks for testing skills, experience, and certifications but there is something more that sets the great software testing pro apart. Here, in no particular order, are some of the things we look out for when hiring these super-testers:

  1. Prioritization Skills:
    A software tester has to have the ability to bite off more than he or she can chew and yet be able to chew and swallow properly. Software testers have to deal with a heavy workload – designing strategy, writing test cases, creating reports, and so on – and in most cases they have to work against tight timelines. Hence, great prioritization skills and good time management are the hallmarks of a great tester; somehow their day seems to have just that many more hours. They need to understand what needs to be tested and when, which tasks should rank lower on the priority list, which tasks should be automated and which should be manual, and which tasks need to be addressed immediately.
  2. Attention to Detail:
    Testing demands an eye for detail. It can be easy to miss a small bug but that small miss can compound into a bigger problem in a very short span of time. So while identifying glaring issues seems easy enough, a testing pro will be able to identify the not-so-obvious issues, the small stuff that can snowball into a big impact on the application at hand. Sherlock with an instinct for code is what we need.
  3. A Creative Mind:
    Test professionals who can think beyond what the software is expected to do or what the users expect from the software are the ones who truly shine in this area. They should have a creative mind that allows them to think of new ideas to test a product and come up with ideas to use test cases in different scenarios. Coming up with new ideas to test a product ensures that the product performs optimally when it does get stressed.
  4. Curiosity:
    Curiosity is a great trait in a tester. Only when a tester has a keen and curious mind will he or she try to think out of the box, look for problems in the unlikeliest of places, and come up with intelligent solutions. A curious mind also gives the tester the ability to see the big picture and connect the dots to see how each action impacts the project.
  5. Ability to think from the user’s perspective:
    A tester should be able to think like a user. In order to achieve end-user satisfaction, testers have to think from the perspective of the user. They should be able to get into the users’ shoes and walk around in them to identify how users want to communicate and interact with the product. Great testers have the ability to understand their target audience, assess how their user base will interact with the application, and then develop test cases and a test plan that provide complete coverage and an application that performs optimally.
  6. Ability to ask questions:
    A tester-extraordinaire has to learn to ask questions…a lot of questions…even questions that might seem irrelevant to another without feeling awkward or uncomfortable. By learning to ask the right set of questions, a tester can understand the requirement, understand the changes that need to be incorporated and implement them, understand the bigger picture and define the scope of testing.
  7. Data analysis skills:
    Great testers don’t only write test cases; they also have the ability to read and analyze the test data generated by a particular application. If they have identified a ‘non-reproducible bug’, they should have the capability to analyze the test environment, the test data, the interruptions in the code, and so on, to assess where the bug originated and get it fixed accordingly. They also need to analyze data generated from test script execution during test automation to find loopholes and performance gaps and identify ways to increase testing productivity.
  8. Reporting skills:
    Testing demands a lot of reporting. Hence, good reporting skills – the ability to report negative things in a positive way, to write status reports for clients, and especially to say a lot in a succinct, crisp manner – should come naturally to a tester.
  9. Thirst for knowledge:
    To be good at, well, anything, and especially to be a good software tester, one has to possess a thirst for knowledge. Knowledge, for a tester, does not end with mastering one scripting language; it continues as they stay in step with the latest technological developments and automation tools to keep coming up with new ideas to test better.
  10. Be a great negotiator:
    Negotiating well should come naturally to a tester as they have to negotiate with different people at different stages of a project. They have to have the skills to convince developers (who usually are quite possessive about the code they develop) that there is a defect in the code, explain its impact and get the defect resolved.

Conclusion
Another thing that a tester has to have in abundance is ‘perseverance’. Only when a tester is patient enough to explore the software constantly to find bugs and make new improvements, and takes all the testing challenges and complexities in a positive spirit, can he or she become a great tester. At the end of the day, we go by the motto, “When we move our focus from completion to contribution, life becomes a celebration”. Our best testers feel the same way!

Node.js–A Great New Way to Build Web Apps

The past few years have seen Node.js explode onto the application development scene like a superhero. Despite its humble beginnings with Yammer and Voxer, Node.js established its authority quite fast and is now seeing great mainstream adoption, with giants like Walmart and PayPal putting their trust in it; Netflix too moved its website UI from Java to Node.js. Once the underdog, Node.js has established its credibility and superhero status within the enterprise, and an increasing number of developers are adopting this open source, cross-platform runtime environment to build fast and scalable web applications. Ever since its launch, Node.js has been seen as a cool and trendy server-side platform that has managed to attract the developer community. A great thing about Node.js is that applications for it are written in JavaScript, making it a great choice for developing real-time applications. Apart from this, Node.js is packed with a host of other features that make it ideal for building web applications. So what makes Node.js so great?

  • Neutral Language
    Since Node.js runs JavaScript, the same language can be used on the front end as well as the back end, breaking down the boundary between front-end and back-end development and making the development process more efficient. Considering that JavaScript is used by a majority of developers, this saves them the trouble of translating code and helps manage development time and cost. With Node.js, developers do not need to translate the logic in their heads into a separate server-side framework, and they do not have to translate HTTP data from JSON (the data-interchange format) into server-side objects. Since Node.js is generally understood by both the Java and .NET camps, it is easy for developers to deploy it on both Unix and Windows infrastructures. Node.js uses Google’s V8 engine from Chrome, which is written in C++ and has exceptional running speed, as V8 compiles JavaScript directly into native machine code.
  • Scalability
    Node.js effectively addresses the concurrency problems that plague developers. The problem of concurrency in server-side programming languages often causes poor performance and limits the throughput and scalability of an application. With Node.js, developers get an event-driven architecture and a non-blocking I/O API that take care of these issues. Node.js is also built from the ground up to handle asynchronous I/O, which helps with many web development related problems.
    Node.js can also split a single process into multiple processes, called workers, by using the cluster module. This module allows developers to create ‘child processes’ that function under the ‘parent process’ and communicate with the parent by sharing server handles and using IPC (inter-process communication); a minimal cluster sketch appears at the end of this post. Furthermore, applications in Node.js are easier to scale, since developers write simple code and Node.js takes over from there. Instead of using processes and threads, Node.js uses a simple event loop with defined callbacks, and the server automatically re-enters the loop when a callback completes, which lets it scale efficiently. Simply put, Node.js helps applications execute common tasks like reading from or writing to the file system, network connections, or the database with ease and speed, and makes applications capable of managing a large number of simultaneous connections with high throughput.
  • Built-in support
    Node.js has built-in support for package management through NPM, the Node Package Manager, a tool that comes by default with Node.js installations. The NPM registry is publicly available, offers reusable components through an online repository of over 300,000 packages, and provides dependency and version management. This ecosystem encourages sharing, is open to all, and gives developers more scope to create effective solutions by letting them update, share, and reuse code with ease.
  • Great for real-time web applications
    Developing real-time web applications such as chat and gaming apps in Node.js is extremely easy. Developers do not need to concern themselves with things such as low-level sockets and protocols. Node.js allows developers to write JavaScript on both the server and the client side and facilitates automatic data synchronization by sending data between the client and the server, ensuring that data changes on the server are immediately reflected where required. Applications in Node.js are composed of small modules piped together, which ensures that, unlike monolithic applications, they do not creak under unforeseen weight and stress. This also makes adding new functionality to the application much easier, as the changes do not need to be made deep inside the codebase.

    Along with all this, Node.js can also be used as a proxy server if the enterprise does not have the infrastructure for a proxy. It also allows true data streaming, does not treat HTTP requests and responses as isolated events, and thereby reduces processing time. Node.js applications are also capable of dealing with high loads: in 2013, Walmart put its entire Black Friday traffic through Node.js and its servers did not exceed 1% utilization despite having 200 million users online.

    Node.js has the features that make it most appealing to the developer community and also render it enterprise-ready: it is easy to scale, secure, and easy to learn. Its asynchronous input-output operations also take care of the low-latency issues that plague many tech companies. Adding it all together, it is clear that organizations can achieve more with Node.js, as roughly half the number of developers can be used to build products. It also reduces the number of servers required to serve clients and increases app performance by cutting load times by almost 50%. Given the increasing industry confidence in Node.js, it is quite clear that its future is indeed bright.
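The cluster sketch referenced in the scalability point above, in TypeScript for Node.js (version 16 or later, which exposes cluster.isPrimary); the port number is a placeholder, and one worker per CPU core is just a common starting point.

```typescript
import cluster from "node:cluster";
import http from "node:http";
import { cpus } from "node:os";

const PORT = 3000; // illustrative port

if (cluster.isPrimary) {
  // Parent process: fork one worker per CPU core; the workers share the listening socket.
  for (let i = 0; i < cpus().length; i++) {
    cluster.fork();
  }
  cluster.on("exit", (worker) => {
    console.log(`worker ${worker.process.pid} exited, forking a replacement`);
    cluster.fork(); // simple self-healing: replace a crashed worker
  });
} else {
  // Worker process: a single-threaded event loop serving requests with non-blocking I/O.
  http
    .createServer((_req, res) => {
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end(`handled by worker ${process.pid}\n`);
    })
    .listen(PORT);
}
```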

Software Testing Life Cycle

We are all aware of a tree’s life cycle, where a small seed goes through distinct phases to gradually grow and develop into a large tree.

A similar concept of a life cycle is followed in the software engineering field, mainly in the development life cycle and the testing life cycle. The former covers the gradual development of business or functional requirements into a software application, while the latter covers the testing of the software application from scratch through to the release of a quality product. Since this article is not concerned with the development life cycle, we will discuss the testing life cycle only.

What is Testing Life Cycle?

The development life cycle is followed by the testing life cycle. A testing life cycle comprises several phases and activities, arranged in a sequential manner, to initiate, execute, and conclude the testing process.

A software testing process can be initiated as soon as the development process begins and may be carried out in parallel with development activities. This can be understood through the V&V development model, where a corresponding test methodology is defined for each development phase.

Now, coming back to the testing life cycle: it mainly consists of the following phases, carried out one after another.

Let’s find out what each phase consists of and is responsible for.

1. Requirement Analysis:-

The very first phase of the software testing life cycle involves the study and analysis of the available requirements and specifications. Both functional and non-functional requirements are viewed and studied from the testing point of view to identify the testable requirements, i.e. those requirements which may produce results when fed with input data.

When to go for it?

  • On the availability of requirements and specifications.
  • When the application architecture is available.

Activities:

  • Brainstorming sessions for the requirement analysis and feasibility.
  • Identifying and sorting out the requirement priorities.
  • Creating the requirement traceability matrix (RTM).
  • Identifying the suitable test environment.
  • Identifying the requirements acceptable for the automated testing and the manual testing.

Responsibility

The requirement analysis stage sees the combined efforts of the QA team, project manager, test manager, system architect, business analyst, client, and the major stakeholders, so as to gain a greater understanding of the requirements and, subsequently, better outcomes.

Outcomes

  • Testable Requirements.
  • Requirement Traceability Matrix(RTM)
  • Automation feasibility report (if applicable).

2. Test Planning:-

With the information about the requirements gathered in the previous phase, the QA team moves a step ahead in the direction of planning the testing process. Basically, a strategy or strategies are defined and described for the testing process and activities.

When to go for it?

  • On the successful completion of the requirement analysis phase.
  • When testable, refined, and clear requirements have been defined and specified, i.e. on the availability of requirement documentation.
  • Good understanding of the product domain.
  • Availability of Automation feasibility report (if any).

Activities:

  • Scope and objectives are outlined.
  • Deciding the testing types to be performed along with the specific strategy for each of them.
  • Roles and Responsibilities are determined and assigned.
  • Identifying the resources and testing tools required for the testing.
  • Estimating the time and the efforts to carry out the testing activities.
  • Defining and detailing the test environment.
  • Defining the time schedules.
  • Entry and exit criteria, along with suspension and resumption criteria, are defined.
  • Planning the training activity and sessions required by the testers(if any).
  • Risk analysis is being done.
  • Change management process is specified and described.

Responsibility:

As per the requirement and the availability, QA Manager or QA lead is accountable for planning the testing process.

Outcomes:

  • Test Plan documentation
  • Time and effort estimation documentation.

3.Test Case Design & Development:-

The requirements have been analyzed, and the QA team has accordingly come up with a test plan. Now it is time to do some creative work and give shape to this test plan in the form of test cases. Based on the test plan and the detailed requirements, test cases are designed and developed for the purpose of verifying and validating each and every requirement specified in the documentation.
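Purely as an illustration (the field names below are assumptions, not part of any standard), a designed test case captured as structured data might look like this in TypeScript:

```typescript
// Illustrative shape of a designed test case; the field names are assumptions, not a standard.
interface TestCase {
  id: string;
  requirementId: string; // links the case back to the requirement traceability matrix (RTM)
  title: string;
  preconditions: string[];
  steps: string[];
  testData: Record<string, string>;
  expectedResult: string;
  automated: boolean;
}

const loginTestCase: TestCase = {
  id: "TC-042",
  requirementId: "REQ-07",
  title: "A valid user can log in",
  preconditions: ["User account exists and is active"],
  steps: ["Open the login page", "Enter valid credentials", "Click the Sign in button"],
  testData: { username: "jane.doe", password: "<valid password>" },
  expectedResult: "User is redirected to the dashboard",
  automated: true,
};
```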

Activities:

  • Test cases are designed, created, reviewed and approved.
  • Relevant existing test cases are reviewed, updated and approved.
  • Automation scripts (if any) are developed, reviewed and approved.
  • Relevant test data are generated or imported from the development environment.
  • Test conditions, along with the input data and expected outcome for each test case, are defined and specified.

Responsibility:

Generally, the testers have the job of writing the test cases under the supervision of the QA lead or QA manager. However, the testers may be assisted by the developers in generating effective automation test scripts.

When to prepare/create test cases?

  • On the availability of software requirement specification (SRS) and business requirement specification (BRS).
  • When the test plan is ready.
  • Automation feasibility report(if any) is available.

Outcomes:

  • Test cases including automation scripts.
  • Test Coverage Metrics.
  • Test Data

4.Test Environment Setup:-

The software testing process needs an appropriate platform and environment, encompassing the necessary hardware and software, to create and replicate the conditions and environmental factors required to perform the actual testing activities, i.e. the execution of the developed test cases on the software.

Activities:

  • Test data is set up.
  • Test environment checklist is prepared and the required hardware and software are aggregated.
  • Test server is set up and network settings are configured.
  • Test environment management and maintenance processes are defined and described.
  • Smoke testing of the environment to check its readiness.
  • Testers are equipped with bug reporting tools.

Responsibility:

The QA team, under the supervision of the QA manager, sets up the test environment.

When to set up Test Environment?

  • When test data is ready for use.
  • Test Plan documentation is available.
  • Needed resources such as hardware, software, testing tools & framework, server, etc. are available.

However, the test environment set up phase may be carried out concurrently with the test case design & development stage.

Outcomes:

  • Test Environment is set up and ready to execute tests.
  • Smoke Test Results.

5.Test Execution:-

With the test cases, test data, and a suitable test environment in place, the QA team is now ready to get hands-on with actual testing activities. The test execution phase involves executing the developed test cases, with the help of the test data, in the test environment that has been set up.

Activities:

  • Test case execution as per the test plan.
  • Comparison of actual results with the expected outcomes.
  • Identifying and detecting defects.
  • Logging the defects and reporting the identified bugs.
  • Mapping defects to the test cases and updating the requirement traceability matrix accordingly.
  • Re-testing, once a defect gets fixed or removed by the development team.
  • Regression testing (if required).
  • Tracking each defect to its closure.

Responsibility:

Test Engineers are deployed to carry out the task of test case execution.

When to go for the test execution?

Equipped with the test strategy, test plans, test cases, test data, a properly configured and set up test environment, and the other needed resources, the QA team can kick off the test execution process.

Outcomes:

  • Test Status and results.
  • Bug or Defect Report.
  • Complete and updated Requirement Traceability Matrix (RTM).

6.Test Closure:-

The completion of the test execution phase and the delivery of the software product mark the beginning of the test closure phase. This phase involves meetings and discussions among the QA team members with respect to test execution and its results. Apart from the test results, other testing-related parameters are considered and reviewed, such as the quality achieved, test coverage, test metrics, project cost, adherence to deadlines, etc.

Activities:

  • Retrospection of the whole testing process.
  • Test Life Cycle exit criteria are evaluated along with other essential aspects such as test coverage, quality achieved, fulfilment of goals and objectives, critical business goals, etc.
  • The need to change the exit criteria, test strategy, test cases, etc. is discussed.
  • Test Results are analysed and reviewed.
  • All the test deliverables such as test plan, test strategy, test cases, etc. are collected and maintained.
  • Test Closure Report and test metrics are prepared (a small metrics sketch follows this list).
  • Defects are arranged severity-wise and priority-wise.
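
As a rough illustration of how closure metrics might be tallied, the sketch below computes a pass rate and severity-wise defect counts from an execution log; the record structure is an assumption made only for this example.

```python
# Sketch of test-closure metrics: pass rate and defects grouped by severity.
# The shape of the execution log is an assumption for illustration.
from collections import Counter

executions = [  # hypothetical execution log
    {"test_id": "TC-01", "status": "pass"},
    {"test_id": "TC-02", "status": "fail", "severity": "critical"},
    {"test_id": "TC-03", "status": "pass"},
    {"test_id": "TC-04", "status": "fail", "severity": "minor"},
]

passed = sum(1 for e in executions if e["status"] == "pass")
pass_rate = passed / len(executions) * 100
defects_by_severity = Counter(e["severity"] for e in executions if e["status"] == "fail")

print(f"Pass rate: {pass_rate:.1f}%")                       # 50.0% for the sample data
print(f"Defects by severity: {dict(defects_by_severity)}")  # {'critical': 1, 'minor': 1}
```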

Responsibility:

Generally, the QA lead or the QA Manager is responsible for preparing the test closure report.

When to perform test closure activities?

Generally, the test closure activity begins after the completion of test execution activities and delivery of the software product. However, it is not necessary to carry out the closure task only after the delivery of the software application. It may be performed after closure of the testing activities due to some other reasons such as achievement of targets, cancellation of the project or when the product needs update, etc.

Outcomes:

  • Test Closure Report.
  • Test Metrics.
  • Lessons learned.

Conclusion

In a nutshell, much like the development life cycle, the testing life cycle also consists of several phases, and each phase involves a large number of activities that carry out the testing process strategically and in an orderly, effective and efficient manner, thereby ensuring maximum productivity and quality.

Keeping Your Testing and Automation Strategy Relevant

“Evolution is the secret for the next step” – Karl Lagerfeld

The need to change and the ability to adapt to change has been the reason why today, we have grown so much. Without going into the details of human evolution, which will not find relevance in this piece, we want to mention the evolution of technology and how phenomenally it has grown over the last few decades. This growth is only because someone, at some time, identified a ‘chance’ of growth…of doing something better. Bringing about all this change was not easy, yet it was essential and imperative in order to stay relevant.

Just like everything else, change is also essential for a test automation strategy to stay relevant. In today's dynamic business environment, where technology changes and advancements are the norm, frequent product upgrades and product evolution are inevitable in order to stay relevant and ahead of the curve. Keeping this in mind, a strong testing strategy is a must for ensuring a high-performing and flawless product. Since the consumer does not have the time, energy or bandwidth to deal with a product riddled with bugs and slow performance, testing assumes an even more important role. For testing professionals, this means building a testing suite that can accommodate this change.

  1. Much like taking stock of an inventory, testing professionals have to look at the overall test strategy as well as the detailed test plans and test cases to identify which test plans will remain relevant in the long run and which will become obsolete. Taking this big-picture view, with an eye for detail, on a regular basis becomes essential to release upgraded products that are bug-free while keeping development costs under control.
  2. When there is a product upgrade, changing the entire test strategy can become a problem that can snowball into a big expense. Much like product development, having a monolithic test plan with many interconnected parts only slows down the process, since if one test fails the entire testing suite comes to a halt. Having smaller and independent test cases addresses this problem and increases the efficiency of the testing suite.
  3. One more key element is the test data. As the product evolves, the conditions it operates under will change, and the testing has to address those changed conditions. Creating test data that reflects those conditions, getting frequent data dumps from the production team to source relevant test data, and assessing how the tests can be spread to other environments are some of the issues a test automation suite should cover to stay pertinent.
  4. Then the big item – test automation! What must be considered is the level of automation incorporated into the overall testing strategy. There has to be a healthy balance of manual and automated testing. Test cases that have to be repeated continually, cases that are time-consuming to run manually and need speed of execution, or cases that are difficult to perform manually are ripe for automation.
  5. One way to ensure the continued evolution of the automation suite, in line with the evolution of the product it is testing, is to treat the test automation suite itself as a product; one that needs frequent iterations and upgrades. Selecting the right testing tool, designing the framework and its features, preparing the test bed, managing schedules and timelines, and iterating the deliverables of the test automation are some of the key contributors to a successful test automation strategy that stays relevant. Along with this, testers have to focus on the maintainability of the test automation suite so that when the product changes, the suite can adapt to that change and deliver what is expected with minimal effort. Just as software needs to be maintained, a test automation suite, too, needs maintenance. Treating your testing assets like any other piece of software thus becomes a critical contributor to the suite's continued relevance. Charting the lifecycle of the testing suite, much like software maintenance, and identifying maintenance needs such as preventive, corrective, and adaptive maintenance are important for the longevity of a test automation suite.
  6. Creating test automation suites that anticipate, or allow for, changes in the UI also ensures that the suite can work with future versions of the product. To make this happen, testers can build test suites that divide tests into individual parts, allow keyword-driven testing (a minimal sketch follows this list) and support multiple scripting languages, among other things. What testers need to bear in mind at all times is that, for dependable test automation suites, assessing the validity of the suite with each product iteration reduces the burden of test maintenance. Proactively adding the tests that are needed and removing the ones that are redundant after each product release increases the life of the test automation suite.
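
To give one hedged illustration of the keyword-driven idea mentioned above, the sketch below expresses a test case as data rows that are mapped to small, reusable actions, so the same steps can survive UI and product changes; every function and value here is hypothetical and exists only for this example.

```python
# Minimal keyword-driven testing sketch: a test case is a list of
# (keyword, arguments) rows, e.g. maintained in a spreadsheet, and each
# keyword maps to a small reusable action. All names are hypothetical.

def open_page(context, url):
    context["page"] = url                      # stand-in for driving a real browser

def enter_text(context, field, value):
    context.setdefault("fields", {})[field] = value

def verify_field(context, field, expected):
    assert context["fields"][field] == expected, f"{field} did not match"

KEYWORDS = {"open_page": open_page, "enter_text": enter_text, "verify_field": verify_field}

test_case = [
    ("open_page", ["https://example.com/login"]),
    ("enter_text", ["username", "demo_user"]),
    ("verify_field", ["username", "demo_user"]),
]

context = {}
for keyword, args in test_case:
    KEYWORDS[keyword](context, *args)          # dispatch each row to its action
print("Keyword-driven test case passed")
```

Because the test logic lives in the data rows rather than in scripts, adding or retiring test cases after each product release stays cheap, which is exactly the maintainability point made above.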

Conclusion

By building a strong test strategy that can remain relevant for a long time, testers provide developers the confidence to refactor legacy code and build solid and stronger products. Building a strong test strategy or a robust automation suite is a labor of love for testers that needs a lot of thought and nurturing. Once that is achieved, the test automation suite and the tester want nothing more than to live happily ever after.

Made For Each Other – How a Dating Site Leveraged the Power of Test Automation?

Over the years, test automation has become an indispensable part of the product development strategy of most companies. In these days of extreme pressure to go-to-market faster, it seems no test strategy is complete without an automation component. This is the story of one of our key customers, a (or maybe THE) leading dating and matchmaking site in the game today, and how they gained from adopting a sustained, strategic and comprehensive test automation strategy – oh, and of how we helped them get there.

Our story starts when an updated version of the dating site was in the works. A consumer internet site like this operates under some fairly extreme conditions. Getting your product out into the hands of the target users at a pace faster than the competition is vital. Then there is the need to provide an incomparable, error and trouble-free user experience – in this social age, even the smallest problem could cause users to switch to alternatives. So the name of the game is fast, extremely high-quality product development.

When we entered the stage, our client was facing a dilemma because of the limited number of QTP licenses they had. The choice was either to take much longer to run their 8000+ scripts with the existing number of licenses or to buy additional licenses. In the first case, they stood to lose a possible market opportunity because of the extra time the release would take, and the second meant a significant expenditure. Neither was a very palatable option.

We approached the problem differently. We suggested a shift to Selenium. As an open-source framework for test automation with wide acceptance in the market, Selenium definitely made sense from a cost point of view. Of course, opting for an open-source option like Selenium had its pros and cons, with some possible compromise on features and support, and possibly a loss of accumulated knowledge. The migration of 8,000 QTP scripts to Selenium was also a massive task in itself. The major challenge was that there was very limited time to complete the conversion, and every day spent on the effort added costs. The capabilities of the manual testing team, a substantial part of their workforce, the effective and efficient channelling of their efforts, and the best utilization of their time over the period of transition were also big concerns.

Our solution was to leverage our own Krypton, a feature-rich, hybrid, scriptless test automation framework. A glimpse at some of its features: parallel test execution is possible within Krypton, a sure-shot time saver; it supports all the major browsers on the market; and it offers keyword-driven testing, automated reporting, and parallel recovery. Using Selenium as the base together with Krypton, the team managed to complete the migration and create a brand-new test automation framework.
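
Krypton itself is proprietary, so purely as a hedged illustration of the parallel-execution idea, the sketch below runs a few independent browser checks concurrently using Selenium's Python bindings and a thread pool; the URLs and the choice of Chrome are assumptions for the example only.

```python
# Rough sketch of parallel test execution with Selenium's Python bindings.
# This only illustrates the general idea of running independent browser
# checks concurrently; URLs and browser choice are hypothetical.
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

PAGES_TO_CHECK = [
    ("home", "https://example.com/"),
    ("search", "https://example.com/search"),
    ("profile", "https://example.com/profile"),
]

def check_page_title(name, url):
    """Open the page in its own browser session and confirm a title is present."""
    driver = webdriver.Chrome()          # assumes a local Chrome/driver setup
    try:
        driver.get(url)
        assert driver.title, f"{name}: page rendered without a title"
        return f"{name}: OK"
    finally:
        driver.quit()

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=3) as pool:
        for result in pool.map(lambda args: check_page_title(*args), PAGES_TO_CHECK):
            print(result)
```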

Now that this phase of the effort has been put to bed we can look back at the results with pride. For one, the automation framework delivered on its promise of achieving greater code coverage in a relatively short span of time. This means a better-tested product going out into the market faster.

Krypton was designed in such a way that it was relatively easy for the manual testers to understand. The analysis at the end showed that, after training, they were working productively with the tool within a very short span of 2-4 weeks, a crucial aspect in the timely delivery of the product.

By using this solution, the company was able to achieve tremendous savings in terms of license costs and the efforts of the manual testing resources – almost $3 million by their own estimates.

Today, the new framework is the preferred choice for automated testing. It has helped the customer increase the efficiency, effectiveness, and overall coverage of the testing effort. What we are most thrilled about is the impact we managed to deliver in such a high-pressure situation. As far as this dating site is concerned, perhaps Krypton was that special one it had been waiting for all its life, and now, as a result – Love is all around!

Strategies for Testing a Minimum Viable Product (MVP)

Creating a Minimum Viable Product gives entrepreneurs the opportunity to test a product idea and assess the validity of their business plan. The heart of the Lean Startup methodology, an MVP is little more than a rough draft, an outline sketch of a product. However, an MVP is, under no circumstances, a half-baked product. It is instead a process through which entrepreneurs assess what their customers actually demand of their product versus what they feel the product should do. Developing an MVP is about answering some rudimentary questions that stem from the theoretical inquiry of “Should this product be built?” or “Can we build a sustainable business around this set of products and services?”, and goes on to developing the ‘build-measure-learn’ feedback loop that tests the assumptions regarding the product by putting the rough draft in front of users. A great number of start-ups favor the MVP approach to software development as they can communicate their product to their target audience, gather feedback fast and iterate the product according to that feedback.

Considering that the focus and aim of a Minimum Viable Product is to remain, well, ‘minimum’, companies developing such products are sometimes unlikely to give much emphasis to testing. Since MVPs have a limited objective, performing elaborate tests on them seems like a waste of time and resources. At the same time, we need to note that in order to gain the customer’s validation, the product has to pass from one test level to another. Thus, having a test plan for an MVP is important too.

A basic test plan could comprise both automated and manual tests. We have written in the past about how, since MVP development does not lend itself to long-term planning, dedicating time and resources to developing a strong test automation strategy can seem like a waste. Given that the aim of the MVP is to build the leanest possible feature set that addresses the core demand of the final product and meets the user criteria, the final product might turn out to be quite different from what was initially envisioned. The automated tests developed as part of the test suite might thus be rendered completely useless by these product iterations. So what should an MVP test strategy contain?

Writing elaborate unit tests for an MVP may not be required. Since the MVP is open to frequent iterations, exhaustively validating that each unit of the software performs as it should, purely to build confidence in the written code, is not necessary. However, we also cannot entirely dismiss unit testing for an MVP. Running a few unit tests once iterations have been made to the code, to see whether the change introduces defects in product functionality and usability, works in favor of the product (a minimal sketch follows).
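
Purely as a hedged sketch of that idea, the example below keeps a handful of unit tests for a hypothetical core MVP feature and marks them so only that small set is re-run after each iteration; the feature, the marker name and the project layout are assumptions for illustration.

```python
# Small set of unit tests for a hypothetical core MVP feature, marked so that
# only these checks are re-run after each iteration, e.g. `pytest -m mvp`.
# (A custom marker such as "mvp" would normally be registered in pytest.ini.)
import pytest

def register_user(email):
    """Hypothetical core MVP feature: accept an email and return a user record."""
    if "@" not in email:
        raise ValueError("invalid email")
    return {"email": email, "active": True}

@pytest.mark.mvp
def test_valid_email_creates_active_user():
    assert register_user("jane@example.com")["active"] is True

@pytest.mark.mvp
def test_invalid_email_is_rejected():
    with pytest.raises(ValueError):
        register_user("not-an-email")
```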

Along with this, it makes sense to conduct some middle-tier tests to ensure that the data is being delivered to the other tiers in the desired format. Since it is not essential to test individual components when developing an MVP, testing the module as a whole to verify the expected output and check the usability of the product makes better sense. A quick round of integration testing to verify and validate the end-to-end functionality of the connected components also helps in delivering a sound, yet basic, MVP.

UI testing is perhaps the most important test for an MVP. UI tests check how the application works for the user and assess whether all the functionalities of the product are understandable and easy to use. They also assess whether the user can navigate seamlessly through the product without stumbling upon bugs, and gauge the possibility of errors in the various interactions that occur during product use. Considering that the average user is more concerned about the usability of the product than its underlying structure, UI testing of an MVP becomes all the more important (a short sketch follows).
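
To make that concrete in a hedged way, the sketch below drives a hypothetical MVP's main flow in a browser and checks that the primary navigation works; the URL, element locator and target page are invented solely for this example.

```python
# Hedged UI-test sketch for an MVP: drive the product's main flow in a real
# browser and confirm the user can navigate it without errors. The URL and
# element locators are assumptions for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # assumes a local Chrome/driver setup
try:
    driver.get("https://mvp.example.com")        # hypothetical MVP landing page
    # The primary call-to-action should be present and clickable.
    cta = driver.find_element(By.ID, "get-started")
    cta.click()
    # After clicking, the user should land on the signup step without errors.
    assert "signup" in driver.current_url, "Navigation to signup failed"
finally:
    driver.quit()
```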

Both the developer and the user know that the MVP is a version that has been put out solely for the purpose of market validation. At the same time, you need to put a relatively ‘sound’ product in front of your target audience to get feedback that holds value and will eventually lead to the development of an elaborate and dependable product. To make sure that this happens, startups, entrepreneurs and other organizations looking to develop MVPs have to put some amount of focus on testing. Taking a more global approach to testing and allocating designated time to do so will only help in developing a product in alignment with the initial vision – one that might be minimum, but in no way poor.

Ruby On Rails vs. PHP

PHP and ROR (Ruby on Rails) are two very widely used and in-demand technologies. Both are dynamic, very flexible, fun, concept-driven and easy to learn. This means that you spend less time learning the details and focus more on learning the programming concepts, which eventually helps the developer build applications faster. Both ROR and PHP are open source and have been around long enough to prove their stability. While PHP had not been too frequent with its upgrades, over the past two years it has had some major releases that have boosted its popularity even more. Presently, PHP holds 20.1% of the market share while ROR stands at a close 18.91%. However, in 2016 ROR adoption picked up greater speed, jumping seven points from the previous year and securing its highest ranking ever in the TIOBE index.

Here, we try to take a close look at both to determine which is to be used and when.

PHP
PHP is a generic Object Oriented Programming language that is simple to learn and easy to use. It has a very large community of developers and users and provides extensive database support. There are a great number of extensions and source codes available for PHP, and it can be deployed on almost all web servers and works on almost all operating systems and platforms. PHP also allows for the execution of code in a restricted environment, and offers native session management and extension APIs. Deploying a CMS in a PHP application is phenomenally simple because of the sheer number of frameworks, libraries, and resources at its disposal.

Deploying a PHP application is also a very simple process. You can simply FTP the files to a web server or deploy them just as easily using Git, without worrying too much about the web stack. When using frameworks like CodeIgniter, the entire PHP framework can simply be copied onto the server and run.

PHP also has a huge web focus. While it is a server-side scripting language that can also be used as a general-purpose programming language, it essentially seems born for the web. It has a high degree of extensibility, which makes it easy to customize during web app development. PHP has also addressed previous issues like object handling and improved its basic object-oriented programming functionality in recent upgrades. The latest PHP 7 release boasts explosive performance improvements, drastically reduced memory consumption and easier error handling, among other features.

Since programmers could manipulate the code to suit their requirements, the evolution of PHP led to a lot of bad code. As coding standards improved, the code became more verbose, making it suitable for enterprise usage. PHP is still the go-to language for building web applications and web development because of its ability to interact with different databases, but it remains unsuitable for desktop applications. PHP is also a great resource for creating dynamic web pages and for creating internal scripting languages for projects. Some of the big names using PHP presently are Facebook, NASA, Zend, Google, etc. Wikipedia says that PHP is installed on over 240 million websites and approximately 2.1 million web servers.

ROR
Ruby, the programming language behind ROR (or just Rails), is heavily influenced by Perl, Eiffel, and Smalltalk; it is object-oriented, has a dynamic type system and provides automatic memory management. Rails itself is a full-stack web application framework. ROR is a mature framework that enables high-quality products that can be maintained easily. It works on multiple platforms, offers a Very High-Level Language (VHLL), has advanced string and text manipulation techniques and can easily be embedded into Hypertext Markup Language (HTML). The ROR framework is highly automated, which allows the programmer to focus on solving the business problem at hand instead of spending time working around the framework. The generators/scaffolding and plug-in assets accelerate the development process and make maintenance a lot easier compared to PHP. The ActiveRecord ORM in ROR is extremely straightforward to use. Additionally, it has integrated testing tools and is object-oriented from the ground up, with a concise and powerful coding structure. Rails also supports caching out of the box, which makes it easy to scale, contrary to popular belief.

However, unlike PHP, ROR has a comparatively steep learning curve and is not quite as easy to run in production mode. ROR is also more difficult when it comes to errors: instead of throwing up an error message, the entire app can simply blow up.

Having said this, while ROR might not be easy to learn, it has better security features and a flexible syntax debugger, and it comes off as a more powerful option than PHP. It would not be too off the mark to say that, while learning Ruby can be difficult, this is a language meant for the ‘thinking developer’ and offers a superior toolset for application development.

ROR is being used by Airbnb, GitHub, Groupon, Shopify, Google Sketchup, BaseCamp, SoundCloud, Hulu, etc. Rails also makes an excellent choice for web apps, highly scalable websites, enterprise applications, and projects that need rapid web development. However, for single-page applications, dynamic content and games, or high-traffic and high-usage platforms like chat rooms, ROR might not be the best choice.

Conclusion
So which one is better – PHP or ROR? To begin with, it wouldn’t be entirely fair to compare the two, since Rails is a framework for Ruby while PHP is a language that itself has many frameworks. However, both ecosystems are efficient and powerful in their own right. Sometimes, selecting one over the other becomes a matter of personal preference, availability of skills and the specific business case.

Key considerations on Big Data Application Testing

2016 is emerging as the year of Big Data. Those leveraging big data are sure to surge ahead, while those who do not will fall behind. According to the Viewpoint Report, “76% (of organizations) are planning to increase or maintain their investment in Big Data over 2 – 3 years”. Data emerging from social networks, mobile, CRM records, purchase histories, etc. provides companies with valuable insights and uncovers hidden patterns that can help enterprises chart their growth story. Clearly, when we are talking about data, we are talking about huge volumes that amount to petabytes, exabytes and sometimes even zettabytes. Along with this huge volume, this data, which originates from different sources, also needs to be processed at a speed that makes it relevant to the organization. To make this enterprise data useful, it has to be surfaced to users through applications.

As with all other applications, testing forms an important part of Big Data applications as well. However, testing Big Data applications has more to do with verification of the data rather than testing of the individual features. When it comes to testing a Big Data application, there are a few hurdles that we need to cross.

Since data is fetched from different sources, for it to be useful, it needs live integration. This can be achieved by end-to-end testing of the data sources to ensure that the data used is clean, that the data sampling and data cataloging techniques are correct, and that the application does not have a scalability problem. Along with this, the application has to be tested thoroughly to facilitate live deployment.

The most important thing for a tester testing a Big Data application thus becomes the data itself. When testing Big Data applications, the tester needs to dig into unstructured or semi-structured data with changing schemas. These applications also cannot be tested via ‘sampling’, as data warehouse applications are. Since Big Data applications contain very large data sets, testing has to be done with the help of research and development. So how does a tester go about testing Big Data applications?

To begin with, testing Big Data applications demands that testers verify the large volumes of data by employing clustering. The data can be processed interactively, in real time or in batches. Checking the quality of the data is also of critical importance: it must be checked for accuracy, duplication, validity, consistency, completeness, etc. We can broadly divide Big Data application testing into three basic categories:

  • Data Validation:
    Data Validation, also known as pre-Hadoop testing, ensures that the right data is collected from the right sources. Once this is done, the data is pushed into the Hadoop system and tallied against the source data to ensure that it matches and has been pushed to the right location.
  • Business Logic validation:
    Business logic validation is the validation of “MapReduce”, which is the heart of Hadoop. During this validation, the tester has to verify the business logic on every node and then verify it against multiple nodes. This is done to ensure that the MapReduce process works correctly, that data segregation and aggregation rules are correctly implemented, and that key-value pairs are generated correctly.
  • Output validation:
    This is the final stage of Big Data testing where the output data files are generated and then moved to the required system or the data warehouse. Here the tester checks the data integrity, ensures that data is loaded successfully into the target system, and warrants that there is no data corruption by comparing HDFS file system data with target data.
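
Purely as a hedged sketch of the output-validation stage described above, the PySpark snippet below compares the Hadoop job's output with the data read back from the target location using record counts and a coarse aggregate; the paths, column name and Spark session setup are assumptions made only for this example.

```python
# Hedged sketch of output validation: compare the Hadoop job's output on HDFS
# with the data loaded into the target system (read back here as an extract).
# Paths and the column name are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("output-validation").getOrCreate()

hdfs_output = spark.read.parquet("hdfs:///jobs/orders/output/")        # hypothetical job output
target_data = spark.read.parquet("hdfs:///warehouse/orders_summary/")  # hypothetical target extract

# 1. Record counts must match: nothing lost or duplicated during the load.
assert hdfs_output.count() == target_data.count(), "Row count mismatch"

# 2. Coarse integrity check: totals of a key numeric column should agree.
hdfs_total = hdfs_output.agg(F.sum("amount")).first()[0]
target_total = target_data.agg(F.sum("amount")).first()[0]
assert abs(hdfs_total - target_total) < 0.01, "Aggregate mismatch suggests data corruption"

print("Output validation passed")
spark.stop()
```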

Architecture Testing forms a crucial part of Big Data Testing as a poor architecture will lead to poor performance. Also, since Hadoop is extremely resource intensive and processes large volumes of data, architectural testing becomes essential. Along with this, since Big Data applications involve a lot of shifting of data, Performance Testing assumes an even more important role in identifying:

  1. Memory utilization
  2. Job completion time
  3. Data throughput

When it comes to Performance Testing, the tester has to take a very structured approach as it involves testing of huge volumes of structured and unstructured data. The tester has to identify the rate at which the system consumes data from different data sources and the speed at which the Map-Reduce jobs or queries are executed. Along with this, the testers also have to check the sub-component performance and check how each individual component performs in isolation.

Performance testing a Big Data application requires the testers to take a defined approach that begins with:

  • Setting up of the application cluster that needs to be tested.
  • Identifying and designing the corresponding workloads.
  • Preparing individual custom scripts.
  • Executing the test and analyzing the results (a small timing sketch follows this list).
  • Re-configuring and re-testing components that did not perform optimally.
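
As a minimal, hedged illustration of the execute-and-analyse step, the sketch below times a stand-in batch job and derives its throughput so that re-configured runs can be compared; the job function and record count are placeholders, not a real Hadoop submission.

```python
# Rough sketch of measuring job completion time and data throughput.
# The job function and record count are hypothetical placeholders for a
# real Map-Reduce job or query submission.
import time

def run_batch_job():
    """Stand-in for submitting a batch job and waiting for it to finish."""
    time.sleep(2)            # simulate the job's running time
    return 1_000_000         # pretend the job processed one million records

start = time.perf_counter()
records_processed = run_batch_job()
elapsed = time.perf_counter() - start

print(f"Job completion time: {elapsed:.1f} s")
print(f"Data throughput: {records_processed / elapsed:,.0f} records/s")
```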

Since testers are dealing with very large data sets that originate from hyper-distributed environments, they need to make sure they can verify all this data quickly. To enable that, testers need to automate their testing efforts. However, since most automation testing tools are not yet equipped to handle the unexpected problems that can arise during the testing cycle, and since there is no single tool that can perform the end-to-end testing, automating Big Data application testing requires technical expertise, great testing skills, and knowledge.

Big Data applications hold much promise in today’s dynamic business environment. But to realize their benefits, testers have to employ the right test strategies, improve testing quality and identify defects in the early stages, delivering not only on application quality but on cost as well.