How to Decide on the Best CMS for Your eCommerce Site?

It has never been easier to build an online presence than it is today. Even online stores now mimic the experience of brick-and-mortar stores, with product displays and images very close to the real thing. eCommerce merchants understand that the online shopping experience has to go beyond simple product browsing and shopping cart functionality. To keep today’s informed customer engaged, they have to use informative content to make their online store an interesting place to shop. While the product display is important, it is equally important to have great content complementing the product for better customer engagement. Content becomes the primary spokesperson in an eStore. Retailers such as Marks & Spencer and bike retailer Wiggle are showing how to engage with their customers by giving them the relevant advice that they need. To do so, they have to employ a powerful Content Management System (CMS) to deliver consistent experiences. However, a CMS is not exclusively about content. It also needs to handle a range of complex functions without compromising on usability for those in charge of managing and updating content. The question then is, with a plethora of CMS options out there, how can you ensure that you are making the right choice?

Why use a CMS?
A CMS offers eCommerce sites scalability, flexibility, extensibility, reliability, and security. As an eCommerce site grows, it needs to store an increasing volume of content in a database in an organized manner so that the content can be manipulated easily. As the sophistication of eCommerce consumers increases, eTailers must ensure that content changes according to the visitor, and that dynamic content can be woven into static content without compromising the UI and display. eCommerce providers also need to ensure that the site is secure and runs reliably, and that they can maintain site performance while adding add-ons. All of this can be managed easily with a powerful CMS.
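The “weaving” of dynamic, per-visitor content into static content that a CMS performs can be sketched in a few lines. This is purely illustrative: the store name, visitor fields, and markup are invented, and Python’s stdlib `string.Template` stands in for a real CMS templating engine.

```python
from string import Template

# Static page markup with placeholders for dynamic, per-visitor content.
# In a real CMS the template would live in the content repository;
# here it is inlined for illustration.
PAGE_TEMPLATE = Template(
    "<h1>$store_name</h1>"
    "<p>Welcome back, $visitor_name!</p>"
    "<div class='recommended'>$recommendations</div>"
)

def render_page(visitor_name, recommended_products):
    """Weave visitor-specific content into the static template."""
    return PAGE_TEMPLATE.substitute(
        store_name="Acme Cycles",  # hypothetical store
        visitor_name=visitor_name,
        recommendations=", ".join(recommended_products),
    )
```

The static markup stays fixed while the personalized pieces change per request, which is exactly the separation a CMS maintains at much larger scale.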

How to decide which CMS to use?
There are a number of factors that contribute to the CMS choice. Some of these are:

  1. Business Goals:
    As with every other aspect of the business, when making a CMS selection, it is imperative that you keep the business goals in mind. You must take stock of the target audience and multichannel demands and identify internationalization and language needs to ensure that the site displays correctly. It is important to assess future trends and define how you want the online business to grow. Assessing the existing information management practices and having a ready checklist of the desired functions and features also helps to fine-tune the CMS selection process.
  2. Technical Knowledge:
    When exploring CMS solutions, take a look at the technical competency of the team who will be using and managing it. There are CMS solutions that are apt for people proficient in CSS and HTML. At the same time, there are also CMS offerings for people with limited technical knowledge, or even none at all, that still allow them to customize the website easily.
  3. Feature Assessment:
    All CMS platforms come with their own set of features that are either built-in or can be added using add-ons or plug-ins. Assessing the kind of features your eCommerce site demands, understanding which of these features will differentiate you from the competition, what eCommerce capabilities the CMS platform offers, and how easily plug-ins can be added for extra functionality are just a few of the considerations. It’s also key to assess the automated functions and processes of the CMS options at hand – stock control, invoice generation, order monitoring, product views, catalog management, cross-selling or upselling capabilities, payment and delivery management etc. – before making a decision.
  4. Customization Capabilities:
    Does the CMS under evaluation offer the eCommerce portal the power of customization and personalization? The CMS solution should offer interactive elements such as quizzes, feedback forms etc. and have the capability to automatically tie these into the customers’ experiences. It should also be able to auto-generate content based on user preferences in on-page locations, refine and streamline displays, and send emails and updates in response to user behaviors. The CMS should also offer one-to-one marketing capabilities for better personalization. Additionally, the CMS should let its users create group permissions, directly edit code, and create custom forms without impacting the entire system negatively – areas that previously were under the exclusive control of the developer. The CMS should also integrate easily with third-party applications to enable competitive advantage.
  5. Technology Demands:
    When selecting a CMS, it is essential to flesh out the technical demands of the site and evaluate its compatibility with the existing technology stack in use. For example, if the eCommerce site is built using PHP and you have a team proficient in that language, choosing a CMS that works on a PHP platform would make more sense than say, choosing Demandware. It also makes sense to see if the CMS platform offers an integration with the existing ERP or warehousing solution etc. to reduce overhead costs.
  6. Deployment Infrastructure:
    Take into account the deployment infrastructure when choosing a CMS. A cloud-based CMS does not demand any IT infrastructure investment and allows eCommerce portals to focus on the business. At the same time, when looking at the infrastructure, it makes sense to take stock of the traffic and bandwidth demands, the time taken for backups and updates etc., and to ensure that the CMS provides the right support so that site performance does not suffer due to latency on CMS deployments.
  7. Speed, Scalability, and Flexibility:
    Identifying how fast the CMS can render content, whether it can operate in different environments or is OS-specific, whether it can handle huge spikes in traffic and scale easily, and whether you can create multi-server environments that mirror each other for load balancing become key points of assessment. Then there is the speed of deployment – assess how easy the CMS is to install, set up, and configure, identify the time it would take to do that, and check whether that meets your needs.
  8. Architecture Flexibility:
    Does the CMS under evaluation offer control over the templating system? Assessing the kind of control the CMS offers is important as it helps users create a unique brand identity easily. Control over the templating system can range from easily editing existing templates to working outside of a template structure altogether. A CMS that offers architectural flexibility frees its users from a set template structure and helps them create a differentiated design experience for their clients.
  9. Mobile Support:
    Today smartphones account for 45.1% of all eCommerce traffic. By the end of 2017, this number is expected to cross 60%. When making a CMS choice, it therefore becomes absolutely imperative to see that it offers responsive mobile support. This will allow you to reach customers seamlessly irrespective of the device they are using. It should also offer API connectivity, so that when you choose to develop a mobile app, the application can connect easily with the CMS platform’s data, making it easier to implement.
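The API connectivity point is what lets a mobile app reuse CMS-managed content. As a rough sketch, assume a headless CMS exposes product content as JSON over HTTP; the endpoint shape and field names below are hypothetical, not any specific CMS’s API.

```python
import json

# A hypothetical payload as a headless CMS might return it from, say,
# GET /api/products/42 -- the endpoint and fields are illustrative only.
SAMPLE_RESPONSE = """
{
  "id": 42,
  "title": "Trail Helmet",
  "price": 59.99,
  "body": "<p>Lightweight helmet for trail riding.</p>"
}
"""

def parse_product(raw_json):
    """Extract the fields a mobile product screen would display."""
    data = json.loads(raw_json)
    return {
        "title": data["title"],
        "price": data["price"],
        "description": data["body"],
    }
```

Because the app consumes plain JSON rather than rendered pages, the same CMS content can feed the website, the mobile app, and any future channel.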

That’s already a pretty long list – and it’s not done yet. When making a CMS choice, evaluating the security features it offers is a must, to ensure that attacks or bugs do not compromise data integrity or site performance. Then there are factors like developer and custom application support, ease of upgrades, and many more to weigh based on your specific needs. The choice is not easy – mainly because it is such a critical element of your eCommerce success. We hope this post will help you draw up your own evaluation criteria when faced with this decision – and if you need help then do not hesitate to ping us!

Moving from Quality Assurance toward Quality Assistance

“Prevention is better than cure” – the phrase applies to almost everything, living or non-living, including software development and software quality assurance.

Let’s begin with Quality Assurance:

Quality assurance, an inherent part of the software development life cycle, is used to evaluate and improve the quality of a software product so that it fulfils the user’s needs and expectations.

Most software development companies follow the same modus operandi for developing software products: development, followed by testing, and subsequently the product release. But is this the right procedure to follow? At first glance it might seem a valid and relevant approach, since it is the established, standard way of executing a software development project. In practice, however, this methodology elongates the whole development process and makes it more complex to execute. Rather than finding defects after development through QA procedures to improve the product’s quality, it would be better to ensure the quality of the product during the development phase itself. The shift from quality assurance to quality assistance is all about that.

Quality assurance works as a gatekeeper, passing a flawless product from the organization’s door into the user’s hands. QA encompasses multiple methodologies and activities, including testing, to ensure the product’s quality – but does it actually assure quality? Confused? Let me outline the purpose of QA in software development. QA verifies and validates the product’s quality against the available or specified requirements and specifications, which means it restricts the evaluation of the product to those requirements and specifications only. If the product fulfils the stated requirements it is deemed acceptable for release, but that does not guarantee quality. QA engineers often follow the well-trodden path, pushed toward the accustomed approach of testing the application by project deadlines, costs, and instructions from their seniors and managers. Simply adhering to orthodox testing standards as instructed, without applying one’s own vision and logical, analytical thinking, might not fully assure the quality of the product.

A tester might be good at his or her own work, but cannot be assured of the quality of the work performed, or still to be carried out, by the different teams involved in the development.

So, what next; Quality assistance?

Quality assistance… a new term? No – rather an alternative name for quality assurance, with a different structure and different means of ensuring, and assuring, software quality.

Quality assistance is a revolutionary approach to developing quality-rich software products. In the quality assistance methodology, instead of being part of a separate QA/testing phase that evaluates the finished product, testers are brought into development to assist developers in building quality into the product.

It is pertinent to mention that a software product under development needs to be monitored, controlled, and maintained from day one of development in order to ensure its quality. But how can the tasks of monitoring, controlling, and maintaining be executed by a developer who is largely unaware of the quality domain and relies only on the requirements and specifications to develop the intended software application? This gap justifies the involvement of testers in development, to help developers gain and maintain quality.

The role of a tester is to examine, explore, visualize, study, research, and analyse the requirements, specifications, and other related parameters for different quality aspects, and subsequently to convey the relevant and useful data and information to developers, so that together they come up with a quality product.

In layman’s terms, developers will be developing and testing the software product themselves, with the help of the skills, training, and knowledge imparted by their fellow testers.
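As a minimal sketch of what this looks like in practice, here is a developer-owned function shipped together with the defensive checks a tester would otherwise flag after the fact. The function and its rules are invented for illustration.

```python
def apply_discount(price, percent):
    """Return the price after a percentage discount.

    The input validation below is the kind of thing a tester, working
    alongside the developer, would push to have added up front rather
    than discover as a defect later.
    """
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer-written checks, shipped with the feature itself.
assert apply_discount(100, 25) == 75.0
assert apply_discount(10, 0) == 10.0
```

The quality knowledge lives in the development step, which is the essence of the quality assistance model.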

Benefits of Quality Assistance:

  • A proficient developer with knowledge and understanding of quality aspects will surely add to the product’s quality and improve the efficiency of the complete software development life cycle.
  • A developer skilled in quality assurance will deliver a more flawless product for release, which may reduce or nullify the probability of low-level and even mid-level defects occurring at a later stage.
  • This frees testers to detect and work on high-level defects in a much more dedicated and attentive manner.
  • A reduced testing phase will shorten the delivery or release cycle.

Challenges in Quality Assistance:

  • Imparting training to developers on quality assurance could be a cumbersome task for the QA engineers.
  • Although developers may be brought up to date on the knowledge and understanding of quality assurance, they will still not acquire the level of expertise a tester has. This may lead to critical production bugs going undetected, which can affect the whole project at a later stage.
  • Lastly, shifting from the existing QA process to quality assistance would be a difficult and complex task for the organization: it has to train, manage, and streamline its teams so that they successfully adapt to the change and function accordingly.

Transition from Quality Assurance (QA) to Quality Assistance (QA):


*Blitz testing involves the participation of all teams to evaluate and assess the different features of the product and provide their feedback and reviews on it.

#Dogfooding involves deploying the product internally, possibly in a beta form, to verify and validate the product’s features.

Note:

Using Blitz testing and the dogfooding technique reflects a lack of confidence in the developers’ testing.

Conclusion:

In a nutshell, it may be inferred that, unlike the quality assurance process, where the QA team is solely responsible for the release of a quality product, quality assistance involves the engagement and contribution of every team toward improving the quality of the software product.

The Business Case for Startups to Outsource Software Development

Skype, Klout, GitHub, Basecamp, and MySQL are just a few examples of startups that successfully outsourced their software development and grew to become billion-dollar organizations. Why did they follow this path, and can your startup go the same way?

“Alone, we can do so little, together we can do so much” – Helen Keller

With technological advancements and the rise of digitization, the world has become smaller and increasingly interconnected. Add new emerging markets and the rise of a skilled workforce, and the case for companies looking to outsource software development becomes quite strong. There has, over a period of time, been a lot of discussion over whether a startup should outsource software development – some say that it is hard to find reliable vendors; others say that managing timelines and an offshore team poses a challenge. Though these concerns are not unfounded, it is also true that once you find the right outsourcing partner there are some clear benefits to be had.

Startups are always walking a tightrope. With limited resources, both financial and human, it does make sense to focus on the core business and outsource the rest. It is also a reality that the software product development environment is in a constant state of flux. The ‘it’ technology of yesterday may no longer be viable for the product that you are trying to create. Platform demands keep changing. Development methodologies evolve… Startups doing product development in-house may get dragged into the many operational aspects of development, and the other aspects of building the business, such as identifying markets, business opportunities, or revenue sources, can get lost.

Let’s face it, the proof of the pudding for a startup will lie in the end product. And who makes a great product? A crack team of technical professionals. It’s by no means certain that a startup will be able to find, hire, and retain such top talent – the founders apart, of course.

So, the fundamental grounds for startups to outsource their software development are apparent. What, though, are the actual benefits that they can reap from doing so?

  1. Lower Development Costs:
    Hiring a team of experts can cost quite a pretty penny. Plus, the buck doesn’t stop with hiring an in-house team. You also have to spend time investing in the right infrastructure, building processes and delivery methodologies, and in training. By outsourcing, a startup can reduce its development cost almost by half, since it does not need to incur any of these expenses. Even among rank-and-file developers, cost advantages can accrue. Labour arbitrage has traditionally been one of the accepted advantages of outsourcing. Research from Aberdeen Group shows that outsourced software development activities cost approximately “30-65% less than in-house development initiatives.” While being face to face with your developers seems good, it is no longer a necessity. With mobile and internet technology evolving at the pace it is, doing business with anyone across the globe has become convenient. Having geographically distributed software teams is now par for the course anyway. Scrums, and meetings to discuss product features, design, or other inquiries, can easily be held using collaboration tools or video conferencing.
  2. Access to Technology Experts:
    It is getting increasingly common to pick horses for courses, the right expert for a specific task, during the development process. Perhaps one of the greatest advantages of outsourcing for a startup is the access it can gain to an array of technology experts. Working with an established outsourcing organization gives you access to highly skilled technology experts whose contribution helps in developing a stronger, feature-rich, and robust software product. These experts can also help identify ways to make the product better and assess whether the product can be developed in other, more cost-effective ways. The advantage here is that a technology expert can be brought in for a specified period of time to perform a particular task, without any long-term commitment and the associated costs.
  3. Team Scaling:
    While the thought of an in-house development team sounds enticing, the reality is that it is restricting when startups need to ramp teams up or scale them down because of the demands of the business. Hiring trained developers is not easy and is time-consuming, while overstaffing is costly. Outsourcing gives startups the flexibility to add resources or reduce them according to the speed of development, project demands, and time-to-market, amongst other considerations.
  4. Partner for Growth:
    There are many outsourcing companies that, instead of a pure fee-based model, are willing to work on more innovative partnership models. Many will offer their services for a stake in the company. The money saved can be used for other activities such as marketing and sales. This model works to the advantage of the startup, as their outsourcing vendors become invested in the success of the company and partners in their progress. All the typical concerns that startups looking to outsource harbor, such as commitment, product quality, delivery timelines, communication etc., get resolved easily with this level of partnership.

All this, of course, presupposes that the outsourcing company will provide timely delivery of service, is resourceful in identifying new solutions and executing them expertly, and has deep technical implementation skills. Since no two development companies are the same, look for one who shares your vision and is willing to work with you as a partner. A great software product then becomes a natural consequence of this partnership.

Strategies for Security Testing

Online applications are becoming more and more sophisticated as the world gets more inter-networked. Enterprises now rely heavily on web applications for running their business and increasing revenue. However, as the sophistication of the application increases, we are also faced with more sophisticated vulnerabilities and application attacks designed to compromise the ability of an organization to conduct business. Application architects, designers, and developers are now focused on creating more secure application architectures and on designing and writing secure code. In order to make an application vulnerability-resistant, it is essential to have a strong strategy for security testing.

Where to begin Security Testing?
Embedding security testing in the development process is essential for revealing application layer security flaws. Thus, security testing must start right from the requirement gathering phase in order to understand the security requirements of the application. The end goals of security testing are to identify whether an application is vulnerable to attacks, whether the information system protects the data while maintaining functionality, whether there is any potential for information leakage, and how the application behaves when faced with a malicious attack.

Security testing is also an aspect of functional testing, since some basic security tests are part of functional testing. But security testing needs to be planned and executed separately. Unlike functional testing, which validates what the testers know should be true, security testing focuses on the unknown elements and tests the infinite ways in which an application can be broken.
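The contrast can be made concrete with a small sketch: a functional test exercises the one path that should work, while security-minded tests probe the same code with hostile inputs. The validator and payloads below are hypothetical examples, not an exhaustive attack corpus.

```python
import re

def is_valid_username(name):
    """Accept only short alphanumeric usernames; rejecting everything
    else closes off whole classes of hostile input at once."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name))

# Functional testing checks the known-good path...
assert is_valid_username("alice_01")

# ...while security testing probes the same code with inputs an
# attacker might try (a tiny, illustrative sample).
hostile_inputs = [
    "' OR '1'='1",                # SQL-injection style
    "<script>alert(1)</script>",  # cross-site scripting payload
    "../../etc/passwd",           # path traversal
    "a" * 10_000,                 # oversized input
]
for payload in hostile_inputs:
    assert not is_valid_username(payload)
```

Real security testing generates far more variations than any hand-written list, but the mindset is the same: assume the input is malicious until proven otherwise.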

Types of Security Testing:
To develop a secure application, security testers need to conduct the following tests:

  1. Vulnerability Scanning:
    Vulnerability scanning tests the entire system under test to detect system vulnerabilities, loopholes, and suspicious vulnerable signatures. This scan detects and classifies the system weaknesses and also predicts the effectiveness of the countermeasures that have been taken.
  2. Penetration Testing:
    A penetration test, also called a pen test, is a simulated test that mimics an attack by a hacker on the system that is being tested. This test entails gathering information about the system and identifying entry points into the application and attempting a break-in to determine the security weakness of the application. This test is like a ‘white hat attack’. The testing includes targeted testing where the IT team and the security testers work together, external testing that tests the externally visible entry points such as servers, devices, domain names etc., internal testing that is conducted behind a firewall by an authorized user, and blind and double blind testing to check how the application behaves in the event of a real attack.
  3. Security Risk Assessment:
    This testing involves assessing the risk of the security system by reviewing and analyzing potential risks. These risks are then classified into high, medium, and low categories based on their severity. Defining the right mitigation strategies based on the security posture of the application then follows. Security audits to check for service access points, inter-network and intra-network access, and data protection are conducted at this level.
  4. Ethical Hacking:
    Ethical hacking uses a certified specialist to enter the system in the manner of an actual hacker. The application is attacked from within to expose security flaws and vulnerabilities, and to identify potential threats that malicious hackers might take advantage of.
  5. Security Scanning:
    To enhance the scope of security testing, testers should conduct security scans to evaluate network weaknesses. Each scan sends malicious requests to the system, and testers must check for behavior that could indicate a security vulnerability. SQL Injection, XPath Injection, XML Bomb, Malicious Attachment, Invalid Types, Malformed XML, Cross-Site Scripting etc. are some of the scans that need to be run to check for vulnerabilities, which are then studied at length, analyzed, and fixed.
  6. Access Control Testing:
    Access control testing ensures that the application under test can only be accessed by authorized, legitimate users. The objective of this test is to assess the access policy of the software components and ensure that the implementation conforms to the security policies and protects the system from unauthorized users.
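A minimal sketch of an access control test, assuming a hypothetical role-to-permission mapping: the test asserts not only that legitimate roles get exactly what they are granted, but also that unknown or unauthorized roles are denied by default.

```python
# Hypothetical role-to-permission mapping for an application under test.
PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def can_access(role, action):
    """Grant an action only if the role is known and explicitly allows it."""
    return action in PERMISSIONS.get(role, set())

def test_access_control():
    # Legitimate users can do exactly what their role grants...
    assert can_access("editor", "write")
    # ...but nothing more...
    assert not can_access("viewer", "delete")
    # ...and unknown roles are denied everything by default.
    assert not can_access("guest", "read")

test_access_control()
```

The deny-by-default check is the important one: access control tests should probe for what the system wrongly allows, not just confirm what it correctly permits.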

Having a security testing plan that functions in alignment with the speed of development becomes essential. The stakeholders can then derive actionable insights from the conducted tests. They achieve a comprehensive vulnerability assessment and ensure that even the most minor chink is corrected at the earliest. By proactively conducting security testing across the software development lifecycle, organizations can ensure that unforeseen, intentional and unintentional actions do not stall the application at any stage.

Keep an eye out for our future blog where we detail how security testing can be included in each stage of the development cycle.

Private vs. Public vs. Hybrid Cloud – When to choose what?

Today, it’s fair to say that almost all organizations have either moved or are planning a move to the cloud owing to the operational flexibility it offers. Statistics prove as much – consider these as a sample. According to the “2016 State of the Cloud Report” by RightScale, Private Cloud adoption stood at 77% in 2016, up from 63% in 2015. Hybrid Cloud adoption increased to 71% in 2016 from 58% the year before, and Enterprise Cloud adoption increased to 31% from 13% in 2015. Gartner estimates that the global public cloud market will grow approximately 18% in 2017 to just over USD 246.8 billion. Further, almost 74% of tech CFOs credit cloud computing with delivering the most measurable impact on their business this year. Given this wide-scale adoption, over the years we have witnessed three main cloud models appear – private, public, and hybrid clouds. However, the question remains: which one is the most suitable for your enterprise? In this blog, we take a look at these three models and assess when to use which one.

The Public Cloud

In the Public Cloud space, Windows Azure, Amazon Cloud Services, and Rackspace are big players. Amazon Elastic Compute Cloud (EC2), for example, provides infrastructure and services over the public internet, hosted at the cloud vendor’s premises. The general public, SMEs, or large enterprise groups can leverage this cloud model. Here the infrastructure is owned by the company that provides the cloud services. In a public cloud, the infrastructure and services are provisioned from a remote location, hosted at the cloud provider’s datacenter, and the customer has no control over, and limited visibility into, where the service is hosted – but they can use those services anytime, anywhere, as needed. In the Public Cloud, the core computing infrastructure is shared among several organizations. That said, each organization’s data, applications, and infrastructure are separated and can only be accessed by authorized personnel.

The Public Cloud offers advantages such as low cost of ownership, automated deployments, scalability and also reliability. The Public Cloud is well suited for the following:

  • Data storage
  • Data archival
  • Application hosting
  • Latency-intolerant or mission-critical web tiers
  • On-demand hosting for microsites and applications
  • Auto-scaling environments for large applications

The Private Cloud

A Private Cloud, as the name suggests, is a cloud infrastructure meant for exclusive use by a single organization. The cloud is then owned, managed, and operated exclusively by the organization, by a third-party vendor, or by both together. In this cloud model, the infrastructure may be provisioned on the organization’s premises or hosted in a third-party data center. In most cases, however, a Private Cloud infrastructure is implemented and hosted in an on-premise data center using a virtualization layer. Private cloud environments offer greater configurability support for any application, and even support legacy applications that suffer from performance issues in Public Clouds.

While the Private Cloud offers the greatest level of control and security, it does demand that the organization purchase and maintain all the infrastructure and acquire and retain the skill to do so. This makes the Private Cloud significantly more expensive and a not-so-viable option for small or mid-sized organizations.

Choosing a Private Cloud makes sense for:

  • Organizations that demand strict security, latency, regulatory, and data privacy levels.
  • Organizations that are highly regulated and need data hosted privately and securely.
  • Organizations that are large enough to support the costs of running a next-gen cloud data center.
  • Organizations that need high-performance access to a filesystem, such as media companies.
  • Hosting applications that have predictable usage patterns and demand low storage costs.
  • Organizations that demand greater adaptability, configurability, and flexibility.
  • Hosting business-critical data and applications.

The Hybrid Cloud

So, what does an organization do when it wants to leverage the cloud both for its efficiency and cost saving but also wants security, privacy, and control? It looks at the Hybrid Cloud which almost serves as a mid-way point between Public and Private Cloud. The Hybrid Cloud uses a combination of at least one Private and one Public Cloud. The Private Cloud can be on-premise or even a virtual private cloud located outside the organization’s data center. A Hybrid Cloud can also consist of multiple Private and Public Clouds and may use many active servers, physical or virtualized, which are not a part of the Private Cloud. With the Hybrid Cloud, organizations can keep each business aspect in the most efficient cloud format possible. However, with the Hybrid Cloud, organizations have to manage multiple security platforms and aspects and also ensure that all the cloud properties can communicate seamlessly with one another.

A Hybrid Cloud is best suited for:

  • Large organizations that want the flexibility and scalability offered by the Public Cloud.
  • Organizations that offer services for vertical markets – customer interactions can be hosted in the Public Cloud while company data is hosted in the Private Cloud.
  • Organizations that demand greater operational flexibility and scalability – mission-critical data can be hosted on the Private Cloud while application development and testing take place in the Public Cloud.

Given today’s dynamic and increasingly complex business environment, organizations have to constantly reevaluate their cloud infrastructure – whether Public, Private, or Hybrid – to ensure that the cloud delivers on its promise. Since each of these cloud models comes with different security and management demands, organizations have to select their application candidates for the cloud wisely, so that they can foster innovation and improve agility by leveraging their IT resources optimally. What would your choice be?

What should startups look for while choosing their technology stack?

Look at any business today and you will find a compelling dependence on technology. Today, technology also forms the core of any successful startup. When it comes to startups, it has sometimes been seen that while entrepreneurs focus on building the front end of their business, the job of choosing the right product technology stack features low on the priority list…almost as an afterthought.

The right choice of technology stack for product development contributes greatly to the efficiency and smooth running of a start-up. Ensuring that the right technologies are being leveraged ensures that you release on time. At the same time, given the overwhelming number of technology options, this can be a tough decision to make as well.

Many non-technical founders tend to depend on developer opinion when choosing a technology stack. This sometimes can backfire as developers can be biased towards particular technologies depending on their proficiency and comfort level. They also might assess the technology based on its technical merits rather than on the business requirements. Technology options need to be evaluated more objectively and here we take a look at some business considerations that need to be made before choosing a technology stack for building the product that will define your startup.

Usability

One of the primary considerations before making a technology selection is to first identify how and for what the technology will be used. The usage aspect heavily influences a technology decision, as a technology that works perfectly for developing an eCommerce website might not necessarily be best suited for an enterprise mobile application. ‘Purpose’, thus, ranks the highest when selecting a technology. The technology stack has to fulfill the requirements and help in establishing the business.

UI and UX Focus

The consumer of today goes by the principle of ‘don’t make me think’. Having high-end user experiences thus becomes of paramount importance. Simple, intuitive and intelligent user interfaces that facilitate a seamless user experience are a must. Technology choices have to be made such that they act as enablers of usability and allow the application users to be consistently productive in their work.

Talent Availability
You might want to choose the next hot technology on the block, but if you cannot find the talent to work with this technology, you’ll be stuck! For startups, this can be a big financial drain. For example, finding developer talent to create a chat server with Erlang may prove harder than finding developers proficient in Java, Ruby, or Python. Leveraging mainstream technologies that are open source, and opting for a development methodology such as Agile or DevOps with a heavy testing focus, is a good idea. This will give your startup the advantage of getting to market faster, rapidly shipping code, and getting the desired features to users at the earliest.

Technology Maturity
Startups need to look at the maturity of a technology before selecting it, to ensure that the technology is built to last. Programming languages such as Ruby are relatively recent but have gone through several iterations and have now achieved language maturity. Mature technologies also give startups the benefit of a mature tools ecosystem that enables bug tracking and code analysis, and facilitates continuous development and continuous integration, all of which make development faster and easier.

When looking at technology maturity, it is also essential to assess how easily you can build and share solutions built on the technology stack. Leveraging a technology that has great third-party packages, ready-to-use community-generated code, a complete suite of easy-to-use solutions, or automated testing capabilities not only helps in attracting more developers but also makes development quicker and more convenient.

Technology Dependencies

All it takes is one weak link to bring down a large fortress. Take the case of the Heartbleed bug, a flaw in the widely used OpenSSL cryptographic library: when the bug was introduced, every technology that leveraged the library was affected. This goes to show that when making a technology choice you have to ensure that the primary and secondary technologies are robust and secure, and that their dependencies can be managed easily. So if, for example, you are looking at Ruby on Rails, you should know that Rails (the framework) is the secondary technology since it relies on Ruby (the primary technology), and that Ruby will also have its own set of dependencies. To leverage the two well, you need to know the risks of both.
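As a purely illustrative sketch (the package names and edges below are made up, not a real manifest), walking a dependency graph shows how one top-level technology choice silently commits you to everything beneath it:

```python
# Toy dependency graph: each technology maps to its direct dependencies.
# Illustrative only -- not a real package manifest.
DEPENDENCIES = {
    "rails": ["ruby", "activerecord"],
    "activerecord": ["ruby"],
    "ruby": ["openssl"],
    "openssl": [],
}

def transitive_deps(tech, graph):
    """Collect every direct and indirect dependency of `tech`."""
    seen = set()
    stack = [tech]
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Choosing "rails" implicitly commits you to all of these:
print(sorted(transitive_deps("rails", DEPENDENCIES)))
# ['activerecord', 'openssl', 'ruby']
```

A vulnerability in any node of that set, as Heartbleed demonstrated for OpenSSL, is effectively a vulnerability in your stack, so the audit has to cover the whole graph, not just the top-level choice.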

Scalability and Accessibility
Technology choices should support the demands of a growing business. The technology that a startup chooses thus has to allow for adding more users over time, adding new functionalities or services, allowing iterations, and enabling integration with newer technologies. These days, looking at technologies that support a Service-Oriented Architecture (SOA) gives a startup more scope for extensibility by accommodating changes and iterations according to the needs of the market or the demands of product evolution.
Along with this, startups also have to ensure that the technology choice they make allows for greater accessibility and security, so that business users can access the product or service anytime, anywhere.

Community Support
Community support might not rank highly in the startup technology choices priority list, but it probably should. Why? Simply because, as a startup, you can do with all the help that you can get. Along with this, a strong developer network and back-end support emerge as crucial resources when you are exploring the technology to either solve a problem or add new functionalities.

When evaluating technology options, startups also need to consider the maintenance needs of the technology, its compatibility capabilities, and its security levels. Choosing the right technology is imperative for the success of any startup. Startup entrepreneurs thus need to tick the right boxes when making the technology choice if they want to maximize their startup’s chances of success.

Our 5 Test Automation Best Practices

Misbehaving software that gathers bad reviews and customer complaints and delivers a poor customer experience will not only tarnish the product’s brand image but also hurt the public image of the company behind that brand. As the pressure to deliver a high-quality product has risen, so has the emphasis on test automation. In fact, the Sauce Labs survey “Testing Trends in 2016: A Survey of Software Professionals” reported that 60% of responding organizations had automated at least half of their testing efforts, and a very few all of them. That said, most software teams have their own stories of how a majority of test automation initiatives do not pan out or fail in some significant way.

Our belief, as an organization that has built a significant test automation practice with a fair history of success, is that there are some simple things that need to be done right – if you want to see test automation success, that is. Here are 5 best practices to follow from our own vault of automated testing experiences.

  1. Understand the Functionality and Know What to Automate:
    Automating every test case does not really make sense, especially when the functionality or the features change frequently. Ideally, you need to understand the functionality of the software and create automated scripts for those scenarios that require testing with every release. With test automation, apart from regression test cases, you can cover smoke tests and build acceptance tests. Assess the risk and try to identify the critical workflows. Then focus only on those workflows that do not require complex system checking or manual effort. Automating everything under the sun will only lead to wasted resources and time. Furthermore, as a bonus tip, a good understanding of the app or API, before trying to automate the test scripts, can prove vital.
  2. Choose the Right Automation Tool:
    With a variety of tools such as Selenium, SoapUI, TestingWhiz, and others out there, selecting the right tool can prove a daunting task. But do understand that selecting the right tool is the first step on the ladder of test automation success. A wrong choice can prove costly – both in terms of time and effort and in actual money. Therefore, the automation tool you select should be compatible with the technology your app or API uses. Key considerations are specifications such as the technology it uses and the language it is built on. For example, if you’re developing an application in a particular language, the tool you select should ideally support writing test scripts in that language. This minimizes the learning curve and eliminates potential bottlenecks between the testers and developers. However, do remember that you need a set of resources with the skills to use the chosen tool to automate the test cases.
  3. Create Reusable Test Cases:
    A good automation framework is one that allows changes in the test cases as the product and the business needs evolve. In this era of demanding customers, modifications in the software can occur in any release, at any time. And, if your test cases are too rigid or static, you may well end up spending more time maintaining your test suite than testing the product under test! Therefore, as far as possible, focus on creating modular test scripts that are less dependent on each other. Reusable test cases will help the test automation framework in the long run.
  4. Identify Opportunities With Automation:
    When faced with a bunch of manual test cases to automate, first seek out the potential opportunities for automation. This may involve an organic expansion from one test case to another, one business case to another, and one user scenario to another. For example, if you’re given a test case for the login page of a bank, you can easily expand the test case and make it data-driven. Do this by adding other possible scenarios such as invalid password, blank username, invalid username, password without a special character, invalid email, etc. This one test case can then cover all the above scenarios in a single go. The objective, as always, is to deliver maximum bang for the buck.
  5. Avoid GUI Automation:
    Ok, this may be a bit controversial, but remember, the idea is to expend resources most efficiently. Most product folks will agree that GUI automation is the toughest nut to crack in software testing. Therefore, if you can easily achieve the desired results without automating the GUI, you will save your organization some useful dollars. Focus on other methods, such as command-line inputs, as an alternative to GUI automation. Expending too much energy on GUI test automation risks making the test automation slower and potentially more error-prone. All organizations are constrained by resources: time, effort, and money are all limited. Under the circumstances, rather than focusing on automating the testing of the GUI, look for other more promising candidates.
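The data-driven login case described in point 4 can be sketched in a few lines of Python. `attempt_login` below is a hypothetical stand-in for the real system under test, and its validation rules are assumptions made purely for illustration:

```python
def attempt_login(username, password):
    # Hypothetical placeholder for the real system under test.
    if not username or "@" not in username:
        return False  # blank username or invalid email-style username
    if len(password) < 8 or password.isalnum():
        return False  # too short, or missing a special character
    return True

# One table drives all the scenarios listed above.
LOGIN_SCENARIOS = [
    ("user@bank.com", "S3cure!pwd", True),   # valid credentials
    ("", "S3cure!pwd", False),               # blank username
    ("not-an-email", "S3cure!pwd", False),   # invalid email
    ("user@bank.com", "NoSpecial1", False),  # no special character
]

def run_login_suite():
    for username, password, expected in LOGIN_SCENARIOS:
        assert attempt_login(username, password) is expected

run_login_suite()
```

With a framework such as pytest, the same table plugs straight into `@pytest.mark.parametrize`, so each row reports as a separate test; either way, adding a new scenario is one line, not one new test case.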

These are some of our secrets – we are quite sure that you have your own little tips and tricks that have made test automation work like a charm for you too. Go ahead, don’t be shy, tell us your “best practices” too – that’s what the comments section is for!

Is there room for a more secure Android in Enterprise Mobility?

Who’s the taker of the enterprise mobility throne – Android or iOS? While Android has a handy lead in the global smartphone market, it is iOS that has been marginally better received in the enterprise. Security concerns, problems with device management, etc. are a few strikes against Android in the enterprise. The Citrix Enterprise Mobility Cloud Report revealed that iOS had the largest share of the enterprise mobility market in Europe with an adoption rate of 46%. Android followed close on iOS’ heels with a 36% market share. While the advantage with iOS was that it gave IT a homogeneous mobile operating environment, which translated to fewer lags, Android, being open source, did not secure much favor from IT since it may have been more susceptible to malware and attacks. Google, however, has worked judiciously to bring Android up to speed with enterprise needs and worked meticulously to put those IT concerns to rest.

Android has made security a top priority since its Lollipop days. To strengthen its enterprise ambitions, Google launched Android for Work. This introduced a common set of APIs, built directly into Android, to beef up security and manageability. Android for Work allows Android devices to operate in different models. It allows devices to operate in a work profile that separates and encrypts corporate data within the OS, and hence can be used on corporate-issued or privately owned devices as well as for single-use devices such as kiosks.

With each new version, Android has worked towards becoming more secure and more manageable. Android for Work gives enterprises a consistent platform for managing Android devices securely. Data separation, standardized management, security controls, etc. all allow organizations to use Android devices without the worry of jeopardizing their business data. Following the public disclosure of a bug in Stagefright, Google launched a monthly updates program in 2015 with the aim of accelerating the patching of vulnerabilities across the Android device range. In 2016, more than 735 million devices and 200-plus device manufacturers received a platform security update. Along with this, Android also released monthly security updates for devices running Android 4.4.4 and up. It also leveraged its carrier and hardware partners to expand update deployments and managed to release updates in the last quarter of 2016 for over half of the top 50 devices worldwide.

Additionally, to increase device security, Android has streamlined their security updates program to make it easier for manufacturers to deploy security patches. They have also been releasing A/B updates that make it easier for users to apply the patches.

The release of Android 7.0 Nougat further strengthened Android’s security features. These features can now be applied to the work applications themselves instead of the whole device, making device security easier to manage. Nougat also added an ‘always-on VPN’ that protects work network traffic and ensures that data does not travel over unsecured connections. It also implemented a separate password for work applications and further expanded the multiple security layers that come built into Android.

Enterprises can benefit greatly from the built-in nature of enterprise features in Android. A recent example of the same is The World Bank Group that has used these built-in enterprise features to mobilize their workforce and ensure that workforce productivity does not decline on the go. Using Android’s work profile and VPN support, The World Bank has enabled its employees to access sensitive data and yet managed to keep it secure.

Android has also made it easier for administrators to manage a plethora of Android devices. Android Nougat makes it easier for IT admins to suspend app access when not compliant with work policies. Additionally, the QR code provisioning makes it easier to deploy managed devices faster. It also allows customizations of device policy and supports messages in settings.

Android for Work also has a DevHub, a community of developers who collaborate extensively and share best practices on Android enterprise applications. Along with this, there is the AppConfig Community, which maintains standard Android for Work configurations for developing enterprise applications. This aids in developing Android enterprise applications, setting up managed profiles and configurations, and developing single-use solutions for Android devices.

Google has also been working proactively to keep users safe from PHAs, or Potentially Harmful Apps, that put devices and data at risk. Over the years, systems have been created that review applications for unsafe behavior. Verify Apps checks devices for PHAs. In 2016, Verify Apps conducted 750 million daily checks that helped in reducing PHA installation rates on Android devices. In 2016, the number of Trojans reduced by 51.5% in comparison to 2015. Additionally, the number of phishing apps reduced by 73.4% when compared to 2015.

The steps Google is taking to strengthen the Android ecosystem are a clear indication of what its enterprise goals are. The iterations of Android and Android for Work reflect that Google is moving in the right direction with its enterprise ambitions. Though iOS might have a larger share of the enterprise pie at the current moment, it is evident that it’s only a matter of time before IT starts considering Android a serious contender for enterprise mobility. Will it be your choice?

Top 5 Things CEOs Look For When Choosing a Technology Partner

CEOs are driven by efficiency and progress. They want to ensure their company is not missing out on opportunities that can streamline its processes and impact the bottom line. With technology being ubiquitous, CEOs strive to leverage new-age methods to boost productivity, improve efficiency, and, most importantly, respond to the needs of their customers. Hence the need to choose a technology partner that works hand-in-hand with them towards their business goals.

CEOs prefer a technology partner who can fit into the company culture and add value. They want a technology partner who is in it for the long-term. They worry that a wrong choice can seriously set them back – literally as well as figuratively. That’s why CEOs spend so many cycles thoroughly evaluating technology partners before on-boarding them.

CEOs have a long list of standards to consider in the quest for the right technology partner. If you are aspiring to be a top company’s technology partner, you might ask yourself—just how will you position yourself as the right partner for any company?

To my mind, these are the top 5 things that CEOs look for when choosing a technology partner:

Expertise
Your expertise and experience are the keys to your success. It’s not enough to have aggressive sales folks that talk the talk or to have attractive promotional offers. CEOs will usually prioritize capabilities over cost. You need to communicate that you understand how your technology expertise fits into their needs and how you and your approach are going to help the company achieve their goals. You need to demonstrate your experience, through client lists, testimonials, and case studies. A portfolio of the appropriate projects will help.

People
CEOs are often seeking to extend their own engineering capabilities while engaging a technology partner. This is where your people will have to play at the same level as their internal engineering team. CEOs on this quest, thus prefer technology partners who have a pool of qualified, well-trained technical people with the personal experience, and the skills to match. The strong technical experience of your resources increases your chances of being selected, as CEOs know they can rely on them for delivery and quality.

Predictability
Despite their risk-taking public persona, in this area at least, CEOs are extremely risk-averse people. They prefer a situation where they know what they are getting into, and dearly want the faith that what has been committed to them will get delivered: on time, on budget, and at the desired quality level. They will appreciate whatever you can do to create this aura of predictability. You may do this through a strong contract, well-defined processes, examples of the reports you provide your clients, and a believable promise of transparent visibility into what is going on at your end once the development kicks off.

A True Partnership
CEOs favor partners who seek mutual benefit. The winning approach is one where it’s not just the technology company that benefits but the client too. From the outset, the business partnership should be more than a “money in, service rendered” relationship. Rather than just providing the bare minimum, show how you are ready to go above and beyond. Assure them that you are willing to make their problems your own, and that you are willing to do what it takes to help them get ahead. Convince them that you are willing to take initiative, innovate, and demonstrate ownership: in short, everything you would do for your own business.

Readiness for the Long Haul
As I have mentioned, this is an area where CEOs prefer not to experiment. So, if they have identified a company that has the ability to deliver to their expectations, they will likely want to form a long-term partnership. The onus is on you to drive home the point, through your words and even more through your actions, that you too are in it for the long haul. Study their mission and vision and find a way to tie them into your company’s own goals. This will demonstrate your commitment and position you as a true value-add rather than just a generic service provider. These CEOs want someone who already understands the business. So even if the current opportunity is a short-term one, make sure you position your company as a potential long-term technology partner. Be an invaluable asset to CEOs. Be the calling card they won’t throw away.

CEOs have a tough job at the best of times. There are lots of tough choices to be made, and nowhere to pass the buck when one of those choices goes wrong. In this scenario, when a pressing need to engage a technology business partner presents itself, if you can help the CEO make the right choice, and then prove that the decision was the right one, both you and your client stand to gain!

The Benefits and Challenges of Going Open Source

Open Source Software has woven itself smoothly into the fabric of Information Technology today. Whether we realize it or not, we rely heavily on open-source software today…did you know that open-source software powers more than two-thirds of websites across the globe? The Future of Open Source Survey conducted by Black Duck Software and North Bridge revealed that more than 78% of businesses today use open-source software. 66% of the companies responding to the survey stated that they create software built on open source for their clients, and 64% of the companies participate in open-source projects.

One of the main reasons businesses favor open-source software over proprietary software is that it is secure and free, which lowers the procurement barrier. However, those are not the only forces propelling the rise of open-source software. Since open-source software is built for reuse, it permits code redistribution, modification, and even copying without the worry of licensing protocols. This allows collaborative software development and levels the playing field with proprietary software development. Estimates are that open source saves businesses around USD 60 billion annually. In this blog, we take a look at some of the benefits and challenges of working with open-source software.

The Benefits of Open Source Software:

Continuous Evolution = Better Code Quality
Open Source software is open to evolution as the developer community, spread across the length and breadth of the globe, modifies it in real time, thereby improving the technology. This community is focused on identifying bugs and defects and making the necessary adjustments to the code to solve problems proactively. The open-source community also works proactively on identifying what more the code needs to do in order to better its performance. Strong code review practices also ultimately result in better code quality and stronger product development.

Greater Customization
Unlike proprietary software, open-source software gives organizations the benefit of modifying the code to create solutions that meet their specific demands. They can add or delete functionalities and adapt the software to the needs of its users. This gives organizations the capability to make improvements and enhancements. It is important, however, to ensure that modifications to the source code are contributed back upstream; failure to do so can lead to complexities while upgrading the software.

Avoiding Vendor Lock-in
Undoubtedly, one of the greatest advantages of open-source software is that it helps organizations avoid vendor lock-in. This also makes it highly auditable. With open software, organizations have the advantage of long-term viability. Since the source code is available, a business does not need to pay vendors for functionalities such as security. Additionally, they gain freedom from charges for product upgrades and support – charges that can sometimes be prohibitively high. Unlike proprietary software that uses closed formats, open-source software uses open formats, which eliminate the need for reverse engineering as well.

Continuous Availability
In the event of a commercial proprietary software vendor closing operations or getting acquired, there is no guarantee that its software products will remain available for use, be updated in a timely manner, or even be supported. During such an eventuality, switching products is inevitable, yet expensive and hard, especially if there was a heavy investment in the current product. Even during times of good health, proprietary software companies can choose to render older software redundant and stop supporting its format versions. Since the source code in open source is not ‘owned’ by any one person or organization, it becomes much easier to avoid such dangers, as the product’s survival is almost guaranteed.

Along with this, there are many other advantages to using open-source software, such as greater resource availability, great community support, security, simpler license management, integrated management, and easy scalability, to name a few.

The Challenges of Open Source Software

Vulnerability
Since many people have access to the source code, it can be susceptible to vulnerabilities, as not everyone dealing with the code has the best intentions. While most open-source contributors use their access to spot defects and fix them, some can exploit this access to create vulnerabilities, introduce bugs, and sometimes even steal identities. This challenge is less pronounced with proprietary software, as the licensing company has strict quality control processes in place that ensure security parameters are not violated.

Steep Learning Curve
Open-source software may not be very easy and straightforward to use. Operating systems such as Linux are said to have a significantly steeper learning curve and cannot be mastered in a short span of time. Even though Linux is technically superior to much proprietary software, many users find it hard to work with. Hiring the right resources to fill the skills gap often becomes a tedious task.

Inadequate Support
Though the open-source community is very large, getting support to fix a problem can sometimes take more time. Since open source depends on the community to resolve and fix issues, an issue is addressed when the community has the time to review the problem. Also, in open-source software, no one really knows who ideated, designed, and created the product, so in the case of a non-functioning program it becomes hard to identify who is liable. Additionally, organizations might also incur hidden costs in the form of purchasing external support services.

Just like proprietary software, open source software, too, sometimes holds the risk of abandonment. If the main invested programmers lose interest in the product they can abandon it and move on to the next big thing. Another consideration is that when using open-source software it is also essential to do a compatibility analysis to assess if the hardware platform is compatible with the open source platform.

Despite the challenges, open-source focuses on collaboration, community participation, and volunteering… all these factors aid developing high-quality, highly customized products using the latest technology. A quote from Tim O’Reilly, founder of O’Reilly Media and one person responsible for popularizing the term open-source, sums up the reason behind great open source adoption and success – “Empowerment of individuals is a key part of what makes open source work, since, in the end, innovations tend to come from small groups, not from large, structured efforts.”

Ensuring High Productivity Even With Distributed Engineering Teams

The traditional workspace has been witnessing an overhaul. From traditional cubicles to the open office concept to the standing workspace… new trends to increase employee productivity arise every day. One such concept, a fundamentally different way of working when it arrived, has now cemented its place in the industry – that of the distributed teams. Successful implementation of a distributed workforce by companies such as Mozilla, GitHub, MySQL, Buffer, WordPress, and more are a testament to the fact that geographical boundaries cannot deter employee productivity and accountability. In fact, WordPress has over 200 employees distributed all across the globe and contributing successfully in their individual job roles.

Having a distributed workforce has definite advantages. It brings more diversity in business, provides new perspectives to problem-solving, opens up a wider pool of trained resources, and reduces operational costs. Further, a study conducted by BCG and WHU-Otto Beisheim School of Management showed that well managed distributed teams can outperform those who share an office space. However, ensuring high productivity of a distributed engineering team demands ninja-like management precision.

In our years of experience working in a distributed setup with our clients, we have realized that one of the greatest benefits of such a workforce is the immense intellectual capital we have harnessed. We now have some truly bright engineers working for us. Our clients’ teams located in the United States and our team in India successfully collaborate on software projects without a hitch. Let’s take a look at how we make these distributed engineering teams work productively, focus on rapid application delivery, and produce high-quality software each time.

Have a Well Defined Ecosystem

First, it is imperative to have a well-defined ecosystem for a distributed team to work in and deliver high-quality applications in a cost-effective manner. You need to have the right processes, knowledge experts, accelerators, continuous evaluation of the tools and technologies in use, strong testing practices, etc. Along with this, it’s key to establish clear communication processes and optimal documentation. Leverage business communication tools and dashboards for predictability and transparency, and to avoid timeline overruns. Further, it is essential to bring all the important project stakeholders, such as the product owner, the team lead, and the architecture owner, together at the beginning of each project to outline the scope and technical strategy for a uniform vision.

Have Designated Project Managers In Each Location

Distributed teams demand a hybrid approach to project management. It helps, though it may not be essential, to have the stakeholders shouldering lead roles, such as the architects and the project managers, in the same location or time zone as the client. Along with this, it is also essential to have a lead who serves as the single point of contact and acts as the local team’s spokesperson, to streamline communication, help the team stay on track, and avoid delivery delays.

Appropriate Work Allocation and Accountability
Appropriate work allocation is an essential ingredient that can make or break distributed engineering teams. Instead of assigning work based on location, it should be assigned based on team capacity, skills, and the release and sprint goals. Having cross-functional teams that can work independently with inputs from the product owner helps considerably in increasing team productivity, and allows work to be redistributed in the case of sprint backlogs. Giving each team member ownership of a feature can also increase accountability, measurability, and ultimately the productivity of the entire team.

Have a Common Engineering and Development Language
At the outset of the project, it is essential to establish a common engineering and development language. Clearly outlined development procedures, code styles, standards, and patterns contribute to building a strong product irrespective of the team's locational distribution, as code merges and integrations are likely to have far fewer defects. It is also important to align and standardize tools to avoid spending time understanding or troubleshooting tool configurations. Securing team buy-in on the engineering methodology (are you going to use TDD, BDD, or traditional agile?) helps eliminate subjectivity and ambiguity. It is equally essential to have clearly outlined coding standards, technologies of choice, tools, and architectural design to avoid misalignment of values and engineering standards.

Such relevant information should also be published and maintained in the shared community (a virtual community across the distributed teams that serves as a single information source) using tools and dashboards that provide comprehensive information at a glance even for the uninitiated.

Leverage Time Zones Optimally

To ensure the same level of communication in a distributed team as in a co-located one, time zones must be managed impeccably by establishing some overlapping work hours. Doing so makes it easier to involve the key stakeholders in sprint planning, sprint reviews, daily stand-ups, and retrospectives. In the case of a distributed team, it makes sense to break sprint planning into two parts: one that determines what each team is doing at a high level and develops an understanding of sprint backlogs and dependencies, and a second for detailed clarification and breaking stories down into tasks. It is also important to have a remote proxy for the sprint review to establish what each local team has completed.

Testing is another important aspect that can impact the productivity of distributed engineering teams. Since most distributed teams leverage the 'Follow the Sun' principle, activities such as testing can be handed off to the other time zone. By the time the development team is back at work, the testing is already done. This can significantly improve the productivity of the engineering team.

Have An Integrated Code Base

When working towards ensuring the productivity of distributed engineering teams, it is imperative to have a single code repository so that everyone checks in against the same code base. Ensuring that all teams have access to the same CI server, so that all builds and tests run against every iteration, prevents build breakages and the resulting productivity loss. Along with this, it is also essential to have a hot backup server in each location to weather adversities such as server downtime and power outages.

Along with all this, there is another critical ingredient that makes distributed engineering teams more productive: trust. It is essential for distributed teams to trust one another and function as a single cohesive unit. Understanding cultural differences, respecting time zones, and maintaining clear communication between team members are a few things that build trust, foster collaboration, and contribute towards creating a highly productive distributed engineering team. That's our story. What's yours?

Acceptance Criteria vs. Acceptance Tests – Know the Difference

Testing is at the heart of newer development methodologies such as Behavior Driven Development, Test Driven Development and, of course, Agile. In a previous blog on the role of testing in Behavior Driven Development, we touched upon two topics, Acceptance Tests and Acceptance Criteria, and how BDD has changed the approach towards these testing stages. In this blog, we take a look at these similar-sounding and yet very different concepts.

What are Acceptance Criteria?

It is essential to first define what the product is expected to do and the conditions it must satisfy to be accepted by a user. To achieve this, testers need to flesh out comprehensive 'user stories', then iterate criteria specific to each of these stories and define the value proposition, characteristics of the solution, and user flow. Testers then develop test cases based on these user stories and define conditions that must be satisfied for the product to be 'acceptable' to a user. This set of conditions, which defines the standards the product or piece of software must meet, is called the 'Acceptance Criteria'.

Loosely speaking, Acceptance Criteria document the expected behavior of a product feature. They also capture cases that the testing team could have missed while developing test cases. Defining the Acceptance Criteria is the first testing step after writing user stories. Usually, Acceptance Criteria are concise, largely conceptual, and also capture potential failure scenarios. Acceptance Criteria are also called 'Conditions of Satisfaction'. They consist of a set of statements specifying the functional, non-functional, and performance requirements at the current stage of the project, each with a clear pass or fail result. Well-defined Acceptance Criteria outline the parameters of a user story and determine when the story is complete.

Acceptance Criteria should always be written before development commences so that they capture customer intent rather than iterate functionalities in relation to the development reality. They should therefore be written clearly, in simple language that even non-technical people, such as the customer and product owner, can understand. The idea behind writing Acceptance Criteria is to state the intent, not the solution; they should define 'what' to expect rather than 'how' to achieve or implement a particular functionality.
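
As an illustration, here is a minimal sketch of how acceptance criteria for a hypothetical 'password reset' user story could be captured as pass-or-fail statements. The story name and the criteria themselves are invented for the example:

```python
# A sketch: acceptance criteria recorded as clear pass/fail statements.
# The "password reset" story and its criteria are illustrative assumptions.

def story_is_done(results):
    """A user story is complete only when every criterion passes."""
    return all(results.values())

password_reset_criteria = {
    "reset link is emailed within 60 seconds of the request": True,
    "reset link expires after 24 hours": True,
    "new password must differ from the previous password": True,
    "a failed reset attempt shows a clear error message": False,  # still failing
}

print(story_is_done(password_reset_criteria))  # one criterion fails, so not done
```

Note that each criterion states an observable expectation ('what'), not an implementation ('how'), which is exactly the discipline the prose above describes.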

What are Acceptance Tests?

Acceptance Testing is the process that verifies whether the installed piece of code or software works as designed for the user. It is a validation activity that uses test cases covering the scenarios under which the software is expected to be used, conducted in a 'production-like' environment on hardware similar to what the user or customer will use. Acceptance Tests assert the functional correctness of the code and hence contain detailed specifications of the system behavior across all meaningful scenarios. Unlike Acceptance Criteria, which define the expected behavior of a particular feature, Acceptance Tests ensure that the features work correctly and define the behavior of the system, and hence demand more detailed documentation. Acceptance Tests check the reliability and availability of the code using stress tests. They also check the scalability, usability, maintainability, configurability, and security of the software being developed, determine whether the developed system satisfies the Acceptance Criteria, and check that the user story is correctly implemented.

Acceptance Tests can be written in the same language as the code itself, or in a business-readable language such as Gherkin, which is commonly used in Behavior Driven Development.

While Acceptance Criteria are developed prior to the development phase by the product owners or business analysts, Acceptance Tests may be implemented during product development. They are detailed expressions, implemented in the code itself by the developers and testers. Acceptance Testing is usually performed after System Testing and before the system is made available for customer use. To put it simply, Acceptance Tests ensure that the user requirements are captured in a directly verifiable manner and that any problems not identified during integration or unit tests are caught and subsequently corrected.
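
As a sketch, assuming a hypothetical `checkout` function standing in for the system under test, a pair of acceptance tests might look like this. A real acceptance test would exercise the deployed system in a production-like environment rather than a toy function:

```python
# A minimal acceptance-test sketch. checkout() is an invented stand-in
# for the system under test, kept tiny so the test structure is visible.

def checkout(cart, stock):
    """Toy implementation: confirm the order only if every item is in stock."""
    if all(item in stock for item in cart):
        return {"status": "confirmed", "items": list(cart)}
    return {"status": "rejected", "items": []}

def test_order_is_confirmed_when_all_items_available():
    result = checkout(["book"], stock={"book", "pen"})
    assert result["status"] == "confirmed"

def test_order_is_rejected_when_an_item_is_unavailable():
    result = checkout(["book", "lamp"], stock={"book"})
    assert result["status"] == "rejected"

test_order_is_confirmed_when_all_items_available()
test_order_is_rejected_when_an_item_is_unavailable()
```

Each test verifies one acceptance criterion end to end, which is what makes the suite a direct, executable record of the user's requirements.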

There are two kinds of Acceptance Testing, namely:

Internal Acceptance Testing – Performed in-house by members who are not involved in the development and testing of the project, to ensure that the system works as designed. This type of testing is also called Alpha Testing.
External Acceptance Testing – This testing is of two types:
a) Customer Acceptance Testing – Where the customer does the testing.
b) Beta Testing or User Acceptance Testing – Where the end users test the product.

Conclusion:
In conclusion, we can say that, amongst other things, the main difference between Acceptance Criteria and Acceptance Tests lies in the fact that while the former define 'what needs to be done', the latter define 'how it should be done'. Simply put, Acceptance Tests complete the story started by Acceptance Criteria, and together they make sure that the story is complete and of high functional value.

Behavior Driven Development and Automation Testing

Organizations across the globe are feeling the pressure to churn out error-free products faster and reduce time to market. This has led to the growth of new development methodologies that put testing at the heart of product development and foster growing collaboration between testers and developers. Some of these methodologies have also driven an increased impetus on test automation. Behavior Driven Development, or BDD, is one such methodology followed in agile product development. BDD is often considered an extension of Test Driven Development. The focus of BDD is on identifying the required behavior in the user story and writing acceptance tests based on it. BDD also aims to develop a common language to drive development, so that team members understand the critical behaviors expected of an application and realize what their actual deliverables are.

It has become imperative for the development team to understand the business goals a product is expected to achieve if they wish to deliver quality products within the shortest timeframe. BDD puts the customer at the heart of the development approach, articulating the requirements, the business situation, and the acceptance criteria in the Gherkin language, which is domain- and business-driven and easy to understand. The BDD approach identifies the behaviors that contribute directly to business outcomes by describing them in a way that is accessible to developers, domain experts, and testers. BDD leans heavily on collaboration: the features and requirements are written collaboratively by the business analysts, quality analysts, and developers as GWT, i.e. 'Given-When-Then', scenarios. These scenarios are then leveraged by the developers and testers for product development. One of the main advantages of Behavior Driven Development is that it makes the conversation between developers and testers more organized, and that the approach is written in plain language. However, since the scenarios are written in a natural language, they have to be very well written to keep maintenance from becoming tedious and time-consuming. The focus of BDD is to ensure that the development vocabulary moves from being singularly 'test based' to 'business based'.

Role of Test Automation in Behavior Driven Development

We believe that the role of testing and test automation is of primary importance to the success of any BDD initiative. Testers have to write tests that verify the behavior of the system or product being built. The test results read as success stories of the features and hence are accessible to non-technical users as well. For Behavior Driven Development to succeed, it is essential to identify and verify only those behaviors that contribute directly to business outcomes.

Testers in a BDD environment have to identify what to test and what not to test, how much should be tested in one go, and why a test failed. It can be said that BDD rethinks the approach to unit and acceptance testing. The idea is that acceptance criteria should be defined in terms of 'scenarios' expressed in the GWT format: 'Given' defines the preconditions or contextual steps of the test case, 'When' is the event or the steps taken, and 'Then' is the final outcome of the scenario. Much like Test Driven Development, BDD advocates that tests be written first and describe the functionalities that map to the requirements being tested. Given the breadth of the acceptance tests in BDD, test automation becomes a critical contributor to success.
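
To make the GWT format concrete, here is a minimal sketch of a Given-When-Then test expressed directly in Python; the account-withdrawal scenario and the `withdraw` function are illustrative assumptions, not from any real project:

```python
# Scenario (as it might read in Gherkin):
#   Given an account with a balance of 100
#   When the user withdraws 30
#   Then the remaining balance is 70

def withdraw(balance, amount):
    """Toy domain logic: refuse withdrawals that exceed the balance."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdrawal_reduces_balance():
    # Given: an account with a balance of 100
    balance = 100
    # When: the user withdraws 30
    balance = withdraw(balance, 30)
    # Then: the remaining balance is 70
    assert balance == 70

test_withdrawal_reduces_balance()
```

The same three-part structure carries over unchanged whether the scenario lives in a Gherkin feature file or, as here, directly in test code.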

Since Behavior Driven Development focuses on testing behavior instead of testing implementation, it helps greatly when building detailed automated unit tests. Testers thus have to write test cases with the scenario, rather than the code implementation, in mind. By doing so, even when the implementation changes, testers do not have to change the test, its inputs, or its outputs to accommodate it. This makes unit-test automation much faster, less tedious, and more accurate.
Since test cases are derived directly from the feature file setups and contain example sets, they are easy to implement and do not demand extra test data. The automated test suites validate the software in each build and also provide updated functional and technical documentation. This reduces development time and helps drive down maintenance costs.
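
The behavior-over-implementation point can be sketched as follows; `sort_numbers` is an invented stand-in whose internals could be swapped (say, from insertion sort to merge sort) without breaking the test:

```python
# A sketch contrasting behavior-focused testing with implementation-focused
# testing: the assertions below check only observable outcomes, never how
# the function arrives at them.

def sort_numbers(values):
    # Implementation detail: free to change at any time.
    return sorted(values)

def test_output_is_ordered_and_preserves_items():
    data = [3, 1, 2]
    result = sort_numbers(data)
    assert result == [1, 2, 3]     # behavior: the output is ordered
    assert sorted(data) == result  # behavior: same items, nothing lost

test_output_is_ordered_and_preserves_items()
```

An implementation-focused test would instead assert which helper functions were called or in what order, and would break on every refactor even when the behavior stayed correct.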

Though Behavior Driven Development has its set of advantages, it can sometimes fall prey to oversimplification. Testers and development teams need to understand that while a failing test is a guarantee that the product is not ready to go to market, a passing test does not by itself indicate that the product is ready for release. At the same time, this framework only works when there is close collaboration between the development, testing, and business teams, with each informed of updates and progress in a timely manner. Only then can the cost overruns that stem from miscommunication be avoided. Since the testing effort moves towards automation and covers all business features and use cases, this framework ensures a high defect detection rate through higher test coverage, faster changes, and timely releases.

Have you moved the BDD way in your development efforts? Do share what challenges you faced and how the effort panned out.

Automated Testing of Responsive Design – What’s On & What’s Not?

With growing digitization and the increasing proliferation of smartphones and tablets, it is hardly a wonder that mobile devices are geared to become the main drivers of internet traffic. The Visual Networking Index predicted that internet traffic would cross the zettabyte mark in 2016 and double by 2019. It's not just browsing, but commerce too that is becoming more mobile. Criteo's State of Mobile Commerce report states that four out of ten transactions happen across multiple devices such as smartphones and tablets.

Clearly, we have established ourselves in the 'mobile age'. Since mobile has evolved into such a big driver of the internet, it is obvious that websites today have to be 'responsive' to screen size. In 2015, when Google launched its mobile-friendly algorithm, 'responsive web design' became a burning hot topic of discussion across the internet. A responsive design ensures that the user experience is uniform, seamless, and fast, search engine optimization is preserved, and the branding experience remains consistent.

The Testing Challenge To Automate Responsive Design
Responsive web design takes a single-source-code approach to web development and targets multiscreen delivery. Based on the screen size, the browser content adapts itself, determining what to display and what to hide. It therefore becomes absolutely essential to test that the web application renders correctly irrespective of screen size, and this demands multi-level testing. Given the sheer number and variety of mobile devices on the market and the different operating systems, testing responsive web designs can become an onerous task.

Replicating the end-user experience to assess whether the application renders well across the plethora of devices can be tricky: an application running on a desktop monitor will render differently when scaled down to the 1136-by-640-pixel screen of an iPhone. Testing responsive applications hence means testing not only across popular devices but also across newly launched devices. Clearly, responsive websites need intensive testing, but testing across so many devices, on each available browser and operating system, and choosing configurations of physical devices can be a challenge. This means more test-case combinations across devices, operating systems, and browsers, all verified against the same code base.

When testing responsive designs, it is essential to check that the functionality, visual layout, and performance of the website are consistent across all digital platforms and user conditions. This demands continuous testing of new features and verification that the website works optimally across browsers, networks, devices, and operating systems.

Given the intensiveness of this testing, a robust test automation framework for responsive applications is a must. It can dramatically increase the efficiency and thoroughness of the testing effort.

Visual Testing
To ensure that a responsive application responds to any device in a functionally correct manner, it is important to increase the focus on UI testing. Given the complexity of responsive design, you need to identify all the DOM (Document Object Model) objects on the desktop as well as on mobile devices and add relevant UI checkpoints to verify the visual display. Alignment of text, controls, buttons, and images, font size, and text readability have to be tested thoroughly across resolutions and screen sizes. Automating these tests ensures that issues are highlighted faster and the feedback loop becomes shorter, leaving little room for application glitches.
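
As a sketch of such a UI checkpoint, assuming element geometry has already been read from the DOM (the rectangles below are hard-coded assumptions rather than real browser-driver output):

```python
# A sketch of automated layout checkpoints over element bounding boxes.
# Rectangles are (x, y, width, height) in pixels; in practice these would
# be queried from a browser driver at each target viewport size.

def overlaps(a, b):
    """True if two rectangles overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def fits_viewport(rect, viewport_width):
    """True if the element stays within the horizontal bounds of the screen."""
    x, _, w, _ = rect
    return x >= 0 and x + w <= viewport_width

# Example checkpoint at a 640px-wide mobile viewport (assumed geometry).
header = (0, 0, 640, 80)
menu_button = (580, 20, 40, 40)
assert fits_viewport(header, 640)
assert fits_viewport(menu_button, 640)
assert overlaps(header, menu_button)  # the button should sit inside the header
```

Checks like these run in seconds per viewport, which is what shortens the feedback loop the paragraph above describes.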

Performance Testing
Slow page-load times and wrong object sizes are two of the biggest challenges of responsive design. Given that an average website has over 400 objects, the key is to ensure that the size properties of objects do not change and that images load correctly into different viewports. Functional tests of responsive web applications must be done keeping real-world conditions in mind. This involves testing against usage conditions such as devices, network coverage, background apps, and location, and ensuring that the web content displays correctly irrespective of device size. Automating client-side performance testing helps testers assess how long content takes to load on different devices and gauge the overall performance of the website. Memory utilization, stress tests, load tests, and recovery tests need to be performed extensively to assess application performance. Using test automation to write comprehensive test cases for these makes performance testing much easier, faster, and, in the end, more Agile.
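
A sketch of an automated client-side performance check might look like this; the sample timings, device profiles, and budgets are invented for illustration:

```python
# A sketch: given page-load samples (milliseconds) per device profile,
# flag any profile whose 95th-percentile load time breaks its budget.
# All numbers below are illustrative assumptions.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(round(pct / 100 * (len(ordered) - 1))))
    return ordered[index]

def over_budget(samples_by_device, budgets_ms, pct=95):
    """Return the device profiles whose p95 load time exceeds their budget."""
    return [
        device
        for device, samples in samples_by_device.items()
        if percentile(samples, pct) > budgets_ms[device]
    ]

samples = {
    "desktop": [310, 290, 350, 300, 330],
    "phone-3g": [1800, 2100, 2600, 2400, 2500],
}
budgets = {"desktop": 500, "phone-3g": 2000}
print(over_budget(samples, budgets))  # phone-3g breaks its 2000 ms budget
```

Wiring a check like this into CI turns "the site feels slow on 3G" into a reproducible, automatically failing test.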

Device Testing
While it might not be possible to test the web design on each and every device available on the market, leveraging mobile device simulators to test application functionality goes a long way. You can test the application across all form factors, major OS versions, and display densities. Automating navigation testing gives testers greater coverage of the user paths and allows a faster end-to-end run-through of the responsive web application. With test automation, it becomes easier to create content breakpoints, test screen real estate, and transition between responsive and non-responsive environments.
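
The combinatorial load behind device testing can be sketched simply; the device, OS, and browser lists below are illustrative, and a real suite would prune pairings that do not exist (e.g. Safari on Android):

```python
# A sketch of generating a device/OS/browser test matrix. The entries are
# illustrative assumptions; in practice the matrix would be pruned against
# real market-share data and impossible combinations removed.
import itertools

devices = ["iPhone SE", "Pixel 7", "iPad", "1080p desktop"]
operating_systems = ["iOS", "Android", "Windows"]
browsers = ["Safari", "Chrome", "Firefox"]

matrix = list(itertools.product(devices, operating_systems, browsers))
print(len(matrix))  # 4 * 3 * 3 = 36 combinations to schedule or prune
```

Even four devices, three operating systems, and three browsers yield 36 combinations, which is why automation, rather than manual passes, is the only scalable way to cover the matrix.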

Regression Testing
Testers need to adopt test automation extensively to increase the scope of regression testing of responsive web applications. With each new functionality, testers have to make sure that nothing breaks and that the basic application functionality remains unaffected despite the new additions. Given that these tests are voluminous and must be repeated often, leveraging test automation for regression testing ensures that application performance remains unhindered.
To maximize the ROI of your automation initiative, it makes sense to turn to analytics and assess how the responsive web application is actually used. By leveraging analytics, testers can narrow down the devices and networks to test, identify breakpoints, and easily assess what should appear on the screen when navigating from one breakpoint to another.
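
As a sketch of this analytics-driven narrowing, assuming invented traffic shares per viewport profile, one could keep only the smallest set of profiles that covers most real traffic:

```python
# A sketch: pick the smallest set of viewport profiles that covers a target
# share of real traffic. The traffic shares below are illustrative assumptions,
# standing in for numbers an analytics tool would provide.

def smallest_covering_set(traffic_share, coverage=0.90):
    """Take top profiles by traffic share until the coverage target is met."""
    chosen, total = [], 0.0
    for profile, share in sorted(traffic_share.items(), key=lambda p: -p[1]):
        chosen.append(profile)
        total += share
        if total >= coverage:
            break
    return chosen

traffic = {"360x640": 0.38, "375x667": 0.27, "1366x768": 0.18,
           "414x896": 0.09, "1920x1080": 0.08}
print(smallest_covering_set(traffic))  # the top four profiles cover 92%
```

Dropping the long tail of rarely seen profiles from the regression matrix is where most of the automation ROI described above comes from.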

In a nutshell, by carefully choosing candidates for automation, testers can expedite the testing process, achieve greater test coverage and deliver a seamless user experience – and that’s always useful!

Flash Forward – From Flash to Adobe Animate CC

Adobe Flash has dominated some sectors, such as eLearning development, for 20 years now, and for a very long time Flash was the last word in interactive, engaging websites too. Flash played a critical role in creating rich media content, and this ease drove its wide adoption for eLearning courses and websites. In the early days, numerous businesses adopted Flash to create interactive web portals, games, and animated websites; notable names included Cartoon Network, Disney, Nike, Hewlett-Packard, Nokia, and GE. Flash saw further growth and penetration when Adobe introduced the hardware-accelerated Stage3D, used to develop product demonstrations and virtual tools. As Flash leaves its teens, though, the world has fundamentally changed.

There are multiple reasons why Flash needed a revamp. Its lack of touch support on smartphones, compatibility issues on iOS, the need for the Flash Player to run content, and its non-responsiveness were some of the major reasons that prompted Apple to move away from Flash, and the die was cast.

Adobe recognized that it was time for a change, and when it announced the rechristened product, Adobe Animate CC, better days seemed to be coming for developers. We believe that Adobe did the right thing at the right time. With the new name came a more user-friendly outlook and a more market-focused product designed to keep up with the latest trends.

Most reviews of the product suggest the following reasons for you to look at Adobe Animate CC:

  1. Adobe Animate CC retains the familiar Flash-like user interface for rich media creation and extends its support to the HTML5 canvas and WebGL.
  2. Existing Flash animations can be converted into HTML5 Canvas without any issues; even fairly lengthy animations convert with ease.
  3. The Motion Editor in Animate CC allows granular control over motion properties, making it much easier to create animations.
  4. Animate CC produces output that integrates easily into responsive HTML5 frameworks and can scale based on device size. It does not, however, publish a fully responsive output by itself.
  5. Animate CC provides a library of reusable content to speed up production and animation in HTML5.
  6. Animate CC provides multi-platform output, supporting HTML5, WebGL, Flash, AIR, video, and, via custom extensions, even formats such as SVG. It can also export animations as GIFs to be shared online, and in the GAF format used on gaming platforms such as Unity3D.
  7. Animate CC’s timeline feature optimizes audio syncing in animations, a major plus over hand-authored HTML5. It also enables easy control of audio looping.
  8. Videos can be exported in 4K quality using Animate CC, keeping up with the latest video-consumption preferences, and can be given custom resolutions to suit the latest Ultra HD and Hi-DPI displays.
  9. Animate CC also provides the ability to create vector brushes similar to Adobe Illustrator.
  10. Animate CC has added Typekit integration, a font service that lets developers choose from a library of high-quality fonts.

Some reviewers have commented that images occasionally failed to load while creating animations but loaded once the tool was refreshed. This issue can be mitigated by pre-loading images; other factors, such as browser performance and network issues, can also delay image loading.

It has also been observed that some filters did not render the expected results in HTML5 output, compromising the visual quality and richness of the output. These include Gradient Glow, Gradient Bevel, and the Quality, Knockout, Inner Shadow, and Hide Object options of the Drop Shadow filter. Given Adobe's focus on the product, we anticipate these issues will be addressed in future releases.

One interesting thing to note is that Animate CC has eliminated the dependency on the Flash Player completely, though it continues to support Flash output. The tool also complies with the latest Interactive Advertising Bureau (IAB) guidelines and is widely used in the cartoon industry by giants like Nickelodeon and Titmouse Inc. For those seeking a much more in-depth feature comparison between Adobe Animate CC and the Flash versions, we recommend visiting Adobe's website.

It's early days yet, but our view is that Animate CC could be instantly applicable to over one-third of the graphics created today using Flash, which are delivered on more than a billion devices worldwide. Adobe Animate CC marks the beginning of a new era for Flash professionals, just as Flash reaches its 20th anniversary!