Top 7 Challenges in Mobile App Testing

According to Statista, worldwide mobile app store revenues were projected to grow to US $76.5 billion in 2017. A Marin Software study reveals that in the UK, mobile devices now account for 44.8% of ad impressions, 50% of clicks, 46% of spend, and 43% of conversions. People’s obsession with smartphones has enticed businesses to innovate, build interesting applications, and look for ways to improve their customer relationships through such apps.
Designing a user-friendly, killer mobile app is undoubtedly difficult, but surprisingly that is the lesser problem for businesses today. With many tools and technologies available, and easy access to good talent, developing a mobile app has become relatively easy. What concerns businesses is the testing of their mobile apps.
With millions of options available on the app stores, users have become unforgiving: they uninstall apps that are hard to use, fail to serve their purpose, or, worse, contain errors.
The issues with mobile app testing

Let’s analyze some of the key challenges of mobile app testing:

1. Usability and User Experience:

A stellar user experience is a must-have for mobile apps. Testing needs to ensure that an app is genuinely easy to use and that its features do not confuse users. The most important features should be easily accessible on the screen and should deliver the highest value for the user’s time. The experience also needs to be consistent across smartphones and platforms. QA engineers need to design and develop separate test cases for mobile apps, because the user experience differs completely from desktop usage. Testers should always keep in mind the rule of thumb of mobile usability: the user should be able to perform the desired task in less than 3 seconds!

2. Operating Systems:

As smartphone usage grows, users are also becoming smarter about how they use their phones: downloading new apps, viewing websites, staying active on social networking sites, making purchases, and maintaining business communications. As demands on the phone increase and usage patterns change, expectations from mobile operating systems grow too. There are many mobile operating systems in the market today, each with multiple versions, and the complexity of supported platforms has gone to a new level. By the time you make your app compatible with KitKat, Lollipop is already there and you start hearing news about Marshmallow (you know what I mean!). Businesses need to make sure their apps are truly device agnostic and work well across the various operating systems and their versions. The problem gets bigger still when multiple mobile browsers, and their versions, have to be tested as well.

 

[Image: mobile platforms, courtesy of tech.dbagus.com]

 

3. Screen Sizes:

In March 2015, Tim Cook announced that Apple had sold over 700 million iPhones in total, and by some estimates around 3 billion Android smartphones had been sold by the end of 2014. Then there are Windows Phones and BlackBerrys too. While we have the numbers for the popular brands, there is also no dearth of local players continuously launching new phones, and every new phone model potentially comes with a new screen size. Thanks to changing mobile behaviors, consumers are adapting to and responding positively to these changes. Businesses today have no choice but to tweak their app’s design and behavior for the new phones, and to continue offering an exceptional user experience across the various smartphones and screen sizes.
The preferred choice of devices differs by geography, so in one market you might cover 90% of your app’s users with a variety of 5-6 phones. However, if you need to test multiple mobile apps catering to audiences in different geographical locations, a mobile testing lab with only 7-8 devices probably covers just 25% of your customers, given the vastness of the smartphone market.

4. Variety of Carrier Networks:

Apps that are supported across multiple geographical locations and available in multiple languages need to be tested with various operators across multiple countries. This is crucial because, for many apps, user experience and usability depend heavily on the performance of the available carrier network. Such added complexity further increases the testing challenge.

 

[Image: carrier networks, courtesy of ebay.ie]

 

5. Battery Life:

Battery life has been the biggest complaint of smartphone users, and mobile users are very sensitive about it. Every smartphone manufacturer is struggling to enable faster performance, better gaming, video viewing, and more, while still providing long battery life. On top of this, if an app drains the battery further, users don’t hesitate to uninstall it. While app developers need to keep battery consumption in check, testers share the responsibility: apart from the app’s features, usability, and stability, they must test for power consumption as well.

6. Security:

We all keep reading stories about site hacking and data leaks, and businesses are struggling to ensure app security. Statistics suggest that more than 50% of apps do not take sufficient precautions to protect sensitive application or user information, and many do not even use proper encryption. Mobile app testers therefore need a deep understanding of security testing.

7. Performance:

Mobile apps must account for limited and variable network bandwidth; even a shared mobile network can significantly affect an app’s performance. Users are very impatient with slow apps: research by the Aberdeen Group found that around 25 percent of users abandon a mobile app if they experience a delay of more than three seconds. Performance testing is a fairly technical job that covers numerous aspects such as CPU utilization, memory utilization, cache size, memory leaks, internet data usage, offline data usage, caching, and the number of network round trips.

Conclusion:

Mobile app testing is a different and more complicated ballgame. It requires thorough knowledge of testing and QA methodologies, a deep understanding of the mobile app space, and familiarity with multiple areas such as technology, hardware, usability, and user experience. Testers also need access to test labs to ensure maximum test coverage. Building a lab with enough physical devices can be practically impossible as well as costly, yet testing only on simulators is not 100% reliable. Don’t rely on anyone who is not experienced in this field.

Will Windows 10 Change Application Development?

In about 4 weeks, Windows 10 has been installed on over 75 million PCs. Despite predictions of slow and steady adoption, estimates now suggest that roughly 358 million PCs will move to Windows 10 within 12 months, and Microsoft itself is aiming for 1 billion devices running Windows 10 within 3 years. There is no mistaking Microsoft’s focus on Windows 10 as its preferred revenue platform for the future: the company has publicly stated that it intends to use services and apps to generate revenue from customers over their entire computing life cycle. Enterprise adoption, as expected, is slower, but even that should pick up as support for the current favorite OS, Windows 7, starts drying up. Microsoft is also ramping up its enterprise focus with IT-department-friendly Windows 10 features such as easier management, automatic configuration of devices, and security improvements. With Windows 10 clearly here to stay, what impact will be felt in the application development world?
[Image: Windows 10, courtesy of msdn.microsoft.com]

The major change seems to be driven by the unified platform strategy. In many ways, Windows 10 is the final step in Microsoft’s strategy to bring all its device platforms together into one united Windows core. The objective is that every device (PC, tablet, phone, game console, and everything to come in the IoT world) should be able to run the same app, thus creating a universal app platform. In the official MSDN blog introducing this “Universal App Platform”, Microsoft’s Kevin Gallo laid out the goals for the platform as:

 

[Image: Universal app platform, courtesy of blogs.windows.com]

  • Driving scale through reach across device types.
  • Delivering unique experiences.
  • Maximizing developer investments.

Let’s talk about mobile OS’s first: this platform independence means that apps developed for other operating systems like Android and iOS can be moved to Windows universal apps seamlessly. This should increase the number of apps available to Windows mobile users and presumably drive up usage.

Then there are screen sizes. With so many form factors out there, one of the big app development challenges has traditionally been designing for different screen sizes. Windows 10 provides the ability to use a single UI that can adapt to large and small screens, making this task just that little bit easier.

Microsoft has highlighted unique experiences as a platform goal. One way Windows 10 hopes to create such experiences is through the many UI controls it provides. Users interact with apps in several different ways, and these controls can detect how a user is interacting and deliver an appropriate experience. For example, a user on a laptop with a touch screen would get larger icons to select from than someone using a more precise input method such as a mouse or a touchpad.

What about all those PCs out there, many of them in slower-moving enterprises still on Windows 7 or 8 flavors? The good news for developers building desktop apps for these versions is that they can now harmonize their existing .NET and Win32 content with Windows universal apps.

[Image: Universal Windows Platform, courtesy of blogs.windows.com]

How have concerns about developer investments in time and effort been addressed? A significant step in the Windows 10 universal app platform is the inherent ability of websites to run within a Windows 10 universal app and thus make the most of the system services. Website developers are saved the hassle of learning new languages and find it easier to get their apps onto the Windows Store.

ThinkSys Announces Its Platinum Sponsorship Of STARWEST Techwell Event, California

September 8, 2015
ThinkSys, a boutique company delivering excellent, cost-effective, and efficient IT solutions and testing services, announced that it is a platinum sponsor of the STARWEST TechWell event, happening in Anaheim, California in September. ThinkSys plans to launch Krypton, an innovative regression automation testing framework, at the conference.

Over the years, ThinkSys has helped several enterprises and ISVs across the world build quality software while reducing the cost of quality and improving time to market. ThinkSys has used that experience in building Krypton, a low-cost automation solution ideal for testing websites, web-based applications, mobile websites, and mobile native apps.

STARWEST is the premier event for software testers and quality assurance professionals. With keynote sessions by thought leaders in quality assurance and software testing, tutorials, several conference sessions covering crucial aspects of testing, training classes, and a Test Lab, this is a must-attend event for every professional in quality assurance.

Rajiv Jain, the CEO of ThinkSys, will be representing ThinkSys at the conference. Speaking on the occasion, Rajiv said, “Every software development effort involves frequent testing, and that’s why companies are increasingly turning to test automation. Using the Krypton framework, which we plan to launch at the conference, companies can make automation testing easy, reliable, and fast. It will also allow managers to better leverage their existing QA skills in a more productive way.”

Rajiv Jain will be speaking at the conference on ‘Why Do QA Test Automation Projects Fail?’ on Wednesday, September 30, 2015, at 3:00 PM. This interactive session will shed light on the practical aspects organizations need to take care of while implementing their test automation strategy.

Meet the ThinkSys team at the Expo at booth number 35.

About ThinkSys Inc
ThinkSys, a global technology products & services company, helps customers improve and grow their business and e-commerce initiatives across the Web and mobile channels. ThinkSys develops, tests and implements high-quality, cost-effective solutions running in the cloud or on-premise. As a leader in web and mobile manual and test automation, performance and monitoring solutions, using its Krypton framework or other Industry tools, ThinkSys enables developers, QA Professionals, and management to help reduce time to market. ThinkSys is privately held and is headquartered in Sunnyvale, CA. For more information visit http://www.thinksys.com.

Characteristics of an Ace Test Automation Suite

“In some situations, the most important objective of testing is to find as many important bugs as possible. In other situations, finding bugs is not important at all. In yet other situations, bug-finding is only one of a number of important objectives. The wise test professional knows which situation she is in.” – Rex Black

There is no longer any need to make the case for test automation: the obvious value proposition has ensured that software development projects in general, and product development in particular, now always include an allowance for test automation. The real question is what can be done to improve the chances of success of your own test automation efforts. What characteristics should a comprehensive test automation suite possess?
Architecture: Remember that the automation suite is, to all intents and purposes, a software product, and hence its architecture is of prime importance. The best architecture emphasizes methodology, manageability, and maintainability of the suite. The test methodology, essentially how the testing will be carried out, is more important than the technology of the day that goes into creating the suite. The product being tested will keep evolving, especially in these days of continuous delivery, so the suite has to be easy to update and scale.
Process: The success of a test automation strategy depends heavily on how well the process is organized, including management of the test process and of the tests themselves. The former implies tight integration with the business: there is a need to be conscious of the issues the software product or project is looking to address. Efficient and effective involvement of business stakeholders, users, and auditors becomes key.
Trackability: Among the top reasons to consider automation is making repetitive tests faster and easier. In these cases, chances are you will be running the same tests against many devices or in many environments. A great test automation suite ensures you can always keep track of exactly how the automation is faring, giving full visibility into what the automated configuration and compatibility testing is achieving at all times.
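For instance, here is a minimal sketch of such tracking, assuming a TestNG-based suite; the listener class and the "targetEnv" system property are hypothetical, not part of any specific framework:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Tags every result with the environment it ran against, so a
// configuration/compatibility run stays traceable across devices.
public class EnvironmentTrackingListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult result) {
        log(result, "PASS");
    }

    @Override
    public void onTestFailure(ITestResult result) {
        log(result, "FAIL");
    }

    private void log(ITestResult result, String status) {
        // "targetEnv" is an assumed property naming the device/OS/browser under test.
        String env = System.getProperty("targetEnv", "unknown");
        System.out.println(status + " | " + env + " | " + result.getMethod().getMethodName());
    }
}

Registered in the suite configuration, a listener like this turns a multi-device run into a log that can be sliced per environment.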
Capability: In a nutshell, the aim of test automation is to achieve more test coverage in a shorter time while reducing the chances of human error. That said, not all tests are the same; since you can never really achieve 100% test automation, which tests should a great test automation suite prioritize?

  • Traditional wisdom has been that a great test automation suite should help automate routine tasks like smoke tests and regression tests; the rationale is sound.
  • Our view is that a test automation suite should also seek to extend the possibilities of normal testing. In many ways this suggests that an outstanding suite is one that takes on more than is possible with manual testing: a suite that helps execute those test cases that are difficult to execute manually.
  • We have already mentioned cross-platform test cases spanning different OS’s, browsers, and platforms. These are great tests to automate given that they need to be performed repeatedly, a fit case for automation (see the sketch after this list).
  • We have spoken of how the road to success lies in ensuring the test automation is integrated into the business logic. This suggests that a suite that effectively automates the testing of complex business logic is another fit case for automation.
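To make the cross-platform case concrete, here is a minimal sketch assuming Selenium WebDriver with TestNG; the class name, parameter, and URL are illustrative only:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

// One test, many browsers: the browser name is injected per <test> block
// in testng.xml, so the same check repeats across platforms unchanged.
public class CrossBrowserSmokeTest {

    private WebDriver driver;

    @Test
    @Parameters("browser")
    public void homePageLoads(String browser) {
        driver = "chrome".equalsIgnoreCase(browser) ? new ChromeDriver()
                                                    : new FirefoxDriver();
        driver.get("http://www.example.com/");  // placeholder URL
        Assert.assertTrue(driver.getTitle().length() > 0, "page should have a title");
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}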

Boris Beizer said, “More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.” That is an onerous load for testing in general, and test automation in particular, to bear, but the best test automation suites out there have that capability. Happy testing!

Test Automation – At Home in an Agile Environment

“A ‘passing’ test doesn’t mean ‘no problem.’ It means no problem ‘observed’. This time. With these inputs. So far. On my machine.” – Michael Bolton
In 2013, as many as 88% of the organizations responding to VersionOne’s “State of Agile Development” survey confirmed that they were practicing agile development, and that number has only gone up since. Agile, obviously, is defined by an approach of short sprints, iterative development, and short release cycles. Given the apparent time pressure on test cycles, testing expert Bolton’s tune would ring true to many in software product development today (sorry, couldn’t resist the lame pun). The objective is faster testing and more code coverage, so that less “technical debt” is passed on. So what’s the way out? Many have considered test automation to be the answer.

[Image: Test Automation at home in an agile environment, courtesy of maisasolutions.com]

The faster cycles in agile development mean the time available to test is shorter: an excellent case for automating the testing. Each successive release also means more features added, and hence more code to be tested, with more test cases to be covered in the same or less time. This would be practically impossible without automation. The iterative development approach also calls for more robust regression tests to check that new releases don’t break things already fixed in previous versions, again a strong case for a well-put-together automation suite. So, that seems quite categorical: agile product development absolutely needs test automation.

It seems important, thus, to start at the beginning and make test automation a consideration when the product is being designed, essentially designing tests while the product and its features are being designed. This allows the test automation strategy to be based on what the product is expected to do rather than on specific iterations of the code. It also allows designing automated tests that exercise the layers below the GUI, rather than the GUI itself, which is impacted more at each iteration.

Assuming test automation has had the benefit of being part of product planning, an agile (read: iterative) approach can also work for building a complete regression suite. Essentially this means building automation only for those features carried over into the current version from the previous one, focusing on the features that have become stable. Over the course of a few sprints, as the features add up, the automation of their unit tests does too, leading to a regression suite that offers more or less complete coverage. A practical variant of this method is to divide the creation of the suite into parts and approach each of them separately, e.g. the critical suite that must pass every single iteration, the “must-have” suite that must pass all major release iterations, and the “nice-to-have” suite that can be run ad hoc.
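As a minimal sketch of that partitioning, assuming TestNG (the group names and test methods are illustrative):

import org.testng.annotations.Test;

// One regression class, three tiers; testng.xml or the CI job
// selects which group(s) to run for a given iteration.
public class CheckoutRegressionTest {

    @Test(groups = {"critical"})              // must pass every single iteration
    public void userCanLogIn() { /* ... */ }

    @Test(groups = {"critical", "must-have"}) // must pass all major releases
    public void userCanCompletePurchase() { /* ... */ }

    @Test(groups = {"nice-to-have"})          // run ad hoc as time allows
    public void discountBannerIsDisplayed() { /* ... */ }
}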

There are also movements out there that look at this differently. A case in point is the “test first” approach, which in some ways turns the traditional build-first, test-later approach on its head. It proposes to have the tests in place first and use them to validate that the code achieves what it is supposed to, as opposed to using tests to determine whether anything is not working the way it should. Clearly the planning burden here is high: the test automation team has to be firmly integrated into the product planning process at the very start for this to work. A lot of testing professionals have their eye on this interesting approach to see how it pans out.

The test automation case is not without challenges, though. The chief one: when releases are coming thick and fast, which target code base do you base the automation suite on? The other recurring theme is an incomplete strategy; many test automation plans stop at automating the unit tests. A more complete strategy that addresses unit tests, integration tests, system tests, and of course regression tests is far more likely to deliver the promised benefits. Key to addressing both challenges is the ability of product leadership to integrate the test automation team into the early stages of the product design and planning cycle.

In closing, let us accept that test automation has a crucial role to play in agile development; like everything else in software engineering, though, it needs to be approached in a considered and organized fashion. Wasn’t it Louis Srygley who said, “Without requirements or design, programming is the art of adding bugs to an empty text file”?

Ensuring Success of Automated Software Testing

Have repetitive manual tasks escalated your budgets and deadlines? Automated testing is the solution: executing repetitive test cases using software tools.
It takes a combination of manual and automated testing to clear out the bugs. As a NIST report suggested, poor software quality costs the US economy billions of dollars every year, and a big chunk of those bug dollars can be recovered by improving the quality assurance infrastructure.
How to Ensure Success of Automated Software Testing?
As the complexity and scale of software has increased, test automation has become an effective solution in software quality assurance.
Test automation makes sense when there are many repetitive tests, frequent regression testing iterations, or a large set of BVT (build verification test) cases, and when manual execution cannot be relied on for critical functionality.

The success of automation testing depends to a large extent on the selection of testing tools and frameworks. It is for the team of testers to take various factors into account before choosing the automation tools. This one-time exercise is an important one, as it will influence the project in a big way in the long run.

Criteria to consider before selecting a testing tool include the skilled resources available for automation tasks, budget, testing needs, project environment, and technology. Does the automation tool support all the technologies and objects used in the application? A tool that fails to identify the objects used in the application can get you stuck even on small tests.

The tool version used for test development must be stable, and the vendor must provide appropriate customer support along with online help resources and a user manual.

The tool’s learning curve is another important factor: the time needed to learn it must be acceptable for your goals. Consider whether the tool is needed for a single project only or as a common tool across several projects, and make sure it supports most of the coding languages used on those projects.

Choosing a quality automation tool that supports the maximum number of testing types (unit, functional, regression, etc.) is always a better decision. The tool must also be robust enough to automate complex requirements.

The tool must also facilitate adequate reporting with a graphical interface. Clear and concise reports help conclude test results quickly and effectively.

Burgeoning Demand for Mobile Apps

Statistics show great demand for mobile test automation. As per estimates from the International Telecommunication Union, there are about 6.8 billion mobile subscriptions, an astonishing figure equivalent to 96% of the world’s population. An article recently published in Business Insider states that 22% of the global population owns a smartphone.
Demand for mobile apps is burgeoning along with the number of mobile phones. Before launching an app, however, you need to confirm that it works on the devices your market actually uses. With such a range of mobile devices now available, it is important to work with a company capable of delivering apps with all the needed functions.

This is accomplished either through simulators or by testing directly on device types such as BlackBerrys, iPhones, and Androids, so that the application’s functions can be tested and monitored. A big advantage of this approach is that it saves time, money, and energy for the originating company. It helps find the errors, design flaws, and bugs that could affect the overall marketability of the application. The testing program creates a spreadsheet or record of the problems, providing valuable information to the engineers and technicians trained and paid to analyze the data. This is certainly better than users stumbling onto the errors.

The technician then works to resolve the outstanding issues, making sure the functions work perfectly well. It takes the expertise of professionals with the knowledge and experience to get it done right the first time, so that they can turn a 100% bug-free application back over to the clients.

As a company that needs to work with this type of vendor, it is essential to choose someone with a solid reputation, who is trustworthy, and who has competitive pricing, so that you pay for quality and accuracy. Make the call today and get started with a company that holds the same high standards in automated mobile testing that you do!

Budget Allocation to Software Testing

Budget allocations to software testing are generally not trivial, but they are a minority component of overall budgets. The PlanIT Testing Index 2011 reported a 19% allocation to testing, a figure that has been exceptionally stable. Quality assurance was given the highest priority by the banking and finance sector, which allocated 39% of project budgets to it. As for the overall allocation of budget, the highest proportion is earmarked for development activity.

The trend is to prefer automated testing over conventional manual testing, which is tedious and time-consuming. Testers will typically kick off test runs in the evening and return in the morning to analyze the results. For automation to succeed, selecting the right tool is imperative.

 

Automated testing keeps a check on product quality right from the beginning, reducing the time spent on repetitive tasks. Once the automated frameworks are designed, the tests written will continue to run for the lifetime of the project with little maintenance.

When it comes to software testing, you should only ever hire the best, because this is a vital step that cannot be ignored or bypassed if you want to market a viable application or program to consumers. The testing process is where bugs, design flaws, and code errors are found and corrected so that the product runs according to design. If you, as a business owner, try to save money by going cheap here, you will end up paying more in the long run.

 

Check out the website and speak with a customer representative today to find out how they can help your business achieve state-of-the-art programs and applications using their software testing tools. If you need several projects completed at the same time, ask about dedicated resources and their schedule to ensure they can handle what you have to offer.

Emerging Trends in Software Testing To Look For

Competitive pressure and constant evolution keep improving the standards of quality assurance. Here are a few emerging trends in software testing to look for.

  1. Test Automation:
    Test automation is a big factor in improving the efficiency of software testing. It may not completely replace the agility and creativity of manual testing, but it is certainly a quick way to cover the bases throughout the various phases of development. It also brings testing costs down substantially.
  2. Increasing Use of Mobile and Cloud:
    As the 2013-2014 World Quality Report suggests, the percentage of organizations using mobile testing jumped to 55 percent in 2013 from 31 percent in 2012. More mobile applications are relying on the cloud, making it even more important to test cloud-based systems.
  3. Security Testing:
    Security came in a close second to efficiency, garnering 56 percent of the preferences. With the increased connectivity of information systems and devices, opportunities for hacking have gone up as well. Security will continue to be a top focus.
  4. Context-Driven Testing:
    Testers need to apply various approaches throughout product development and hone their context-driven testing skills, whether through formal training or on-the-job observation. The most in-demand testers are those with an array of skills appropriate for many contexts, able to interpret which skills a given situation requires.
  5. Centralized Testing:
    Organizations are moving testing from development teams to centralized testing teams. The Test Center of Excellence (TCOE) model identifies the tools and best practices to improve testing efficacy, and more and more businesses are looking for IT partners with fully operational TCOEs.
  6. Testing in Agile Development Environment:
    The best software testing companies are working to build a sound testing approach that fits the agile development methodology and to use the right testing tools. Companies today need to focus on reaching the delivery phase quickly; a better testing model in an agile environment provides a constant flow of updates, facilitating software development with respect to the needed features.

Automating Web Apps with AJAX Using Selenium WebDriverWait

By Divas Pandey

When I started working on web automation, the biggest challenge I faced was synchronizing the speed of automation script execution with the browser’s response to the action performed. The browser’s response can be fast or slow; it is normally slow for a number of reasons, such as slow internet speed or poor performance of the browser or testing machine. On analyzing automation results, we can see that the greatest number of test cases fail for one reason: element not found during step execution. The solution to this synchronization problem between automation speed and object presence is proper wait management. Selenium WebDriver provides various types of waits.

A simple synchronization scenario: suppose a button exists on a page, and clicking it should make a new object appear, but the object appears very late due to slow internet speed, so our test case fails. Here a wait helps the user handle such issues when redirecting between web pages, refreshing the page, and loading new web elements.

“Dependent on several factors, including the OS/Browser combination, WebDriver may or may not wait for the page to load. In some circumstances, WebDriver may return control before the page has finished, or even started, loading. To ensure robustness, you need to wait for the element(s) to exist in the page using Explicit and Implicit Waits.”

We have 3 main types of wait:

  • Implicit Wait
  • Explicit Wait
  • Fluent Wait

 1) Implicit Wait:

The implicit wait remains alive for the lifetime of the WebDriver object; in other words, it is set for the entire duration of the WebDriver instance. The implicit wait implementation first checks whether the element is available in the DOM (Document Object Model); if it is not, it waits for the element to appear on the web page for a specified time.

Once the specified time is over, it searches for the element one last time before throwing an exception.

Normally an implicit wait polls the DOM, and whenever it does not find an element it keeps the script waiting for it, which slows down test execution. For this reason, people who are very sophisticated in writing Selenium WebDriver code advise against it: for a good script, implicit waits should be avoided.

Example:

Here are the steps to apply an implicit wait.

Import the java.util.concurrent.TimeUnit package:

import java.util.concurrent.TimeUnit;

Create a WebDriver object:

WebDriver driver = new FirefoxDriver();

Define the implicit wait timeout:

driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);

Putting it all together:

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ImplicitWaitTest {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        // Every subsequent findElement() call will poll the DOM for up to
        // 30 seconds before throwing NoSuchElementException.
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        driver.get("http://www.wikipedia.org/");
    }
}

 2) Explicit Wait:

The difference between the explicit and implicit waits is that an implicit wait is applied to every element lookup in the test case by default, while an explicit wait is applied to the targeted element only.

Suppose a particular element takes more than a minute to load. We would definitely not want to set a huge implicit wait, because then the browser would wait that long for every element. To avoid such a situation, introduce a separate wait on the required element only: the implicit wait stays short for every element, while the explicit wait is long for the specific element that needs it.

There are two classes for this purpose: WebDriverWait and ExpectedConditions.

Some of the conditions provided by the ExpectedConditions class are mentioned below:

  • alertIsPresent: Is an alert present?
  • elementSelectionStateToBe: Does the element have the given selection state?
  • elementToBeClickable: Is the element visible and enabled, i.e., clickable?
  • elementToBeSelected: Is the element selected?
  • frameToBeAvailableAndSwitchToIt: Is the frame available? If so, switch to it.
  • invisibilityOfElementLocated: Is the element invisible or absent?
  • presenceOfAllElementsLocatedBy: Are all elements matching the locator present in the DOM?
  • refreshed: Wraps another condition so it survives a page refresh.
  • textToBePresentInElement: Is the given text present in the element?
  • textToBePresentInElementValue: Is the given text present in the element’s value attribute?
  • visibilityOf: Is the element visible?
  • titleContains: Does the page title contain the given text?

Example:

  • First, create an instance of WebDriverWait.

WebDriverWait wait = new WebDriverWait(driver, timeOutInSeconds);

The second argument is the maximum number of seconds the driver should wait. For example:

WebDriverWait wait = new WebDriverWait(driver, 30);

  • Then call the until method on the WebDriverWait object with the expected condition:

wait.until(ExpectedConditions.someCondition(By.xpath("xxxxxxxxxx"), "XXXXXXXXXXXX"));

Here is the code to wait for an element to become clickable:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitTest {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("URL to launch...");  // replace with the page under test

        // Wait up to 10 seconds for the element to become clickable.
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement element = wait.until(ExpectedConditions.elementToBeClickable(By.id("someid")));
    }
}

3) Fluent Wait:

FluentWait defines the maximum amount of time to wait for a specific condition, as well as the frequency with which to check that condition.

We can implement a fluent wait in two ways: using a predicate or using a function. The difference between the two is that a function can return any object or a Boolean value, while a predicate returns only a Boolean value. We can use either one as per our requirement.

To implement FluentWait we need to add the Guava JAR to our project. Below, I explain fluent wait examples with a function and with a predicate.

Fluent Wait with Function:

A scenario for fluent wait with a function: a button exists on a web page, and when the user clicks it an alert modal appears. Here I verify whether the alert is present after the button is clicked. I set the maximum wait to 30 seconds and the polling interval (the frequency with which the condition is checked) to 3 seconds. When the page is launched, the wait watches for the expected alert for up to 30 seconds, checking every 3 seconds and printing ‘Alert not present’ until it finds the alert. If the alert does not appear within 30 seconds, a timeout exception is thrown; if the user clicks the button within those 30 seconds, the alert modal is accepted.

In this code I have used a function to implement the FluentWait. The function here returns the Alert object itself once it appears; a function could equally return a Boolean value or a web element.

// Needs org.openqa.selenium.support.ui.FluentWait and Wait, plus
// com.google.common.base.Function from the Guava JAR mentioned above.
Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
        .withTimeout(30, TimeUnit.SECONDS)    // maximum time to wait
        .pollingEvery(3, TimeUnit.SECONDS);   // how often to check

Alert alert = wait.until(new Function<WebDriver, Alert>() {
    @Override
    public Alert apply(WebDriver d) {
        try {
            return d.switchTo().alert();  // a non-null return ends the wait
        } catch (NoAlertPresentException e) {
            System.out.println("Alert not present");
            return null;                  // null keeps FluentWait polling
        }
    }
});

alert.accept();

 

Fluent Wait with Predicate:

A scenario for fluent wait with a predicate: a button exists on a web page, and when the user clicks it an HTML popup with the ID ‘popup_container’ appears. Here I verify that the popup appears on the page after the click. The maximum wait time and polling interval are the same as in the example above.

When the page is launched, the wait watches for the expected popup for up to 30 seconds, checking every 3 seconds and printing ‘Element is not present…’ until the popup appears. If the popup does not appear within 30 seconds, a timeout exception is thrown; if the user clicks the button within those 30 seconds, it prints ‘got it!!!!! element is present on the page….’.

// Needs com.google.common.base.Predicate from the Guava JAR.
FluentWait<WebDriver> fluentWait = new FluentWait<WebDriver>(driver)
        .withTimeout(30, TimeUnit.SECONDS)
        .pollingEvery(3, TimeUnit.SECONDS);

fluentWait.until(new Predicate<WebDriver>() {
    @Override
    public boolean apply(WebDriver d) {
        // findElements() returns an empty list (instead of throwing)
        // while the popup has not yet appeared.
        if (!d.findElements(By.id("popup_container")).isEmpty()) {
            System.out.println("got it!!!!! element is present on the page....");
            return true;   // true ends the wait
        }
        System.out.println("Element is not present...");
        return false;      // false keeps FluentWait polling
    }
});

Keep watching our blog section for more on automating web apps.

Understanding the Scope of Smoke Testing and Sanity Testing

By Manu Kanwar

The terms sanity testing and smoke testing are used interchangeably in many instances, despite the fact that they do not mean the same thing. There are similarities between the two testing methods, but there are also differences that set them apart.

Smoke Testing:

Smoke testing usually means testing that a program launches and that its interfaces are available. If the smoke test fails, you can’t proceed to the sanity test. When a program has many external dependencies, smoke testing may find problems with them.

In smoke testing, just the basic functionalities are tested, without going into detailed functional testing; thus it is shallow and wide. With smoke testing, requirement specification documents are rarely taken into consideration. The objective of smoke testing is to check the application’s stability before starting thorough testing.
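For illustration, a shallow-and-wide smoke check could look like this minimal sketch, assuming Selenium WebDriver with TestNG (the URL and the "login" element ID are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

// Broad checks only: does the app come up, and is its main entry
// point reachable? No detailed functional validation.
public class SmokeTest {

    @Test
    public void applicationLaunches() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://www.example.com/");  // placeholder URL
            Assert.assertFalse(driver.getTitle().isEmpty(), "app should serve a page");
            Assert.assertTrue(driver.findElements(By.id("login")).size() > 0,
                    "login entry point should be available");
        } finally {
            driver.quit();
        }
    }
}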

Sanity Testing:

Sanity testing is ordinarily the next level after smoke testing: you test that the application is generally working, without going into great detail.
Sanity testing is mostly done after a product has already seen a few releases or versions. In some cases, a few basic test cases for a specific area are combined into a single sanity test case that checks the functionality of that specific area of the product.
Sanity testing is deep and narrow, and the tester will need to refer to specific requirements. The objective of sanity testing is to check the application’s rationality before starting thorough testing.

Are smoke and sanity testing different?

In some organizations, smoke testing is also known as a build verification test (BVT), as it ensures that a new build is not broken before the actual testing phase starts.
When there are minor issues with the software and a new build is obtained after fixing them, sanity testing is performed on that build instead of complete regression testing. You can say that sanity testing is a subset of regression testing.

Important Points:

  1. Both smoke and sanity tests can be executed manually or using an automation tool. When automated tools are used, the tests are often initiated by the same process that generates the build itself.
  2. Depending on the needs of testing, you may have to execute both sanity and smoke tests on the software build. In such cases, first execute the smoke tests and then go ahead with sanity testing. In industry, test cases for sanity testing are commonly combined with those for smoke tests to speed up test execution, which is why the terms are so often confused and used interchangeably (see the sketch after these points).
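As a minimal sketch of that ordering, assuming TestNG's programmatic runner (SmokeTest and SanityTest are hypothetical suite classes, SmokeTest being the sketch shown earlier):

import org.testng.TestNG;

// Run the smoke suite as part of the build; only a green smoke run
// unlocks the sanity suite.
public class PostBuildVerification {

    public static void main(String[] args) {
        TestNG smoke = new TestNG();
        smoke.setTestClasses(new Class[] { SmokeTest.class });
        smoke.run();

        if (!smoke.hasFailure()) {
            TestNG sanity = new TestNG();
            sanity.setTestClasses(new Class[] { SanityTest.class });
            sanity.run();
        }
    }
}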

Visit us at www.thinksys.com or drop us an email at [email protected] to connect with us.

Can Exploratory Testing Be Automated?

By Michael Bolton

In an earlier blog, Simran wrote about the benefits of Ad Hoc Testing and how important it is. This week we bring you Michael’s thoughts on whether or not exploratory testing can be automated. We at ThinkSys believe in making QA automation fundamental to increasing productivity and shortening development cycles. There are (at least) two ways to interpret and answer that question.

Let’s look first at answering the literal version of the question, by looking at Cem Kaner’s definition of exploratory testing:

Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.

If we take this definition of exploratory testing, we see that it’s not a thing that a person does, so much as a way that a person does it. An exploratory approach emphasizes the individual tester, and his/her freedom and responsibility. The definition identifies design, interpretation, and learning as key elements of an exploratory approach. None of these are things that we associate with machines or automation, except in terms of automation as a medium in the McLuhan sense: an extension (or enablement, or enhancement, or acceleration, or intensification) of human capabilities. The machine to a great degree handles the execution part, but the work in getting the machine to do it is governed by exploratory—not scripted—work.

Which brings us to the second way of looking at the question: can an exploratory approach include automation? The answer there is absolutely Yes.

Some people might have a problem with the idea, because of a parsimonious view of what test automation is, or does. To some, test automation is “getting the machine to perform the test”. I call that checking. I prefer to think of test automation in terms of what we say in the Rapid Software Testing course: test automation is any use of tools to support testing.

If yes then up to what extent? While I do exploration (investigation) on a product, I do whatever comes to my mind by thinking in reverse direction as how this piece of functionality would break? I am not sure if my approach is correct but so far it’s been working for me.

That’s certainly one way of applying the idea. Note that when you think in a reverse direction, you’re not following a script. “Thinking backwards” isn’t an algorithm; it’s a heuristic approach that you apply and that you interact with. Yet there’s more to test automation than breaking. I like your use of “investigation”, which to me suggests that you can use automation in any way to assist learning something about the program.

A while ago, I developed a program to be used in our testing classes. I developed that program test-first, creating some examples of input that it should accept and process, and input that it should reject. That was an exploratory process, in that I designed, executed, and interpreted unit checks, and I learned. It was also an automated process, to the degree that the execution of the checks and the aggregating and reporting of results was handled by the test framework. I used the result of each test, each set of checks, to inform both my design of the next check and the design of the program. So let me state this clearly:

Test-driven development is an exploratory process.

The running of the checks is not an exploratory process; that’s entirely scripted. But the design of the checks, the interpretation of the checks, the learning derived from the checks, the looping back into more design or coding of either program code or test code, or of interactive tests that don’t rely on automation so much: that’s all exploratory stuff.

The program that I wrote is a kind of puzzle that requires class participants to test and reverse-engineer what the program does. That’s an exploratory process; there aren’t scripted approaches to reverse engineering something, because the first unexpected piece of information derails the script. In work-shopping this program with colleagues, one in particular—James Lyndsay—got curious about something that he saw. Curiosity can’t be automated. He decided to generate some test values to refine what he had discovered in earlier exploration. Sapient decisions can’t be automated. He used Excel, which is a powerful test automation tool, when you use it to support testing. He invented a couple of formulas. Invention can’t be automated. The formulas allowed Excel to generate a great big table. The actual generation of the data can be automated. He took that data from Excel, and used the Windows clipboard to throw the data against the input mechanism of the puzzle. Sending the output of one program to the input of another can be automated. The puzzle, as I wrote it, generates a log file automatically. Output logging can be automated. James noticed the logs without me telling him about them. Noticing can’t be automated. Since the program had just put out 256 lines of output, James scanned it with his eyes, looking for patterns in the output. Looking for specific patterns and noticing them can’t be automated unless and until you know what to look for. BUT automation can help to reveal hitherto unnoticed patterns by changing the context of your observation. James decided that the output he was observing was very interesting. Deciding whether something is interesting can’t be automated. James could have filtered the output by grepping for other instance of that pattern. Searching for a pattern, using regular expressions, is something that can be automated. James instead decided that a visual scan was fast enough and valuable enough for the task at hand. Evaluation of cost and value, and making decisions about them, can’t be automated. He discovered the answer to the puzzle that I had expressed in the program… and he identified results that blew my mind—ways in which the program was interpreting data in a way that was entirely correct, but far beyond my model of what I thought the program did.

Learning can’t be automated. Yet there is no way that we would have learned this so quickly without automation. The automation didn’t do the exploration on its own; instead, it super-charged our exploration. There were no automated checks in the testing that we did, so no automation in the record-and-playback sense, no automation in the expected/predicted result sense. Since then, I’ve done much more investigation of that seemingly simple puzzle, in which I’ve fed back what I’ve learned into more testing, using variations on James’ technique to explore the input and output space a lot more. And I’ve discovered that the program is far more complex than I could have imagined.

So: is that automating exploratory testing? I don’t think so. Is that using automation to assist an exploratory process? Absolutely.

Republished with permission from http://www.developsense.com/blog/2010/09/can-exploratory-testing-be-automated/, by Michael Bolton. Republication of this work is not intended as an endorsement of ThinkSys’s services by Michael Bolton or DevelopSense.

Emerging Trends in Software Testing and Quality Assurance

Customer expectations are higher than ever when it comes to software quality, so testing becomes even more important. Quality assurance has steadily evolved through the years. Here are a few emerging trends in testing and quality assurance.

Test automation: Quality test automation can contribute a lot to efficiency. It may not substitute the creativity brought in by manual testing, but it does help in making things quicker and more accurate.

Testing mobile and cloud-based systems: Cloud usage has grown manifold in recent times. As the 2013-2014 World Quality Report suggests, the percentage of enterprises using mobile testing grew from 31 percent in 2012 to 55 percent in 2013, and the graph continues to climb.

More emphasis on security: As a survey for the World Quality Report indicates, efficiency and performance constitute the primary focus for mobile testing, at 59 percent, followed closely by security at 56 percent. With the threat of hacking ubiquitous, security is sure to remain in focus.

Context-driven testing: Software testers can no longer follow the same standard procedure on all projects. Rather, they have to follow a context-driven testing approach. They have to learn an array of skills and the ability to interpret which skill to use in a given situation.

Moving to the testing center of excellence model: The model deals with tool identification and the best practices for strengthening the efficacy of tests. The testing process is transferred from development teams to a centralized testing team.

ThinkSys, one of the leading USA-based software testing companies, has kept pace with the changing scenario in software testing. We have separate departments for software development and QA testing, which results in increased efficiency and cost-efficacy. Our experts keep a close watch on the changing scenario, making sure that we keep moving with the stream. Moreover, we adapt to client demands by streamlining the QA structure for improved cost optimization and better accuracy.

Is Ad Hoc testing reliable?

By Simran Puri

What is Ad Hoc Testing?

Performing random testing without any plan is known as Ad Hoc Testing. It is also referred to as Random Testing or Monkey Testing. This type of testing doesn’t follow any designed pattern or plan for the activity; the testing steps and scenarios depend entirely upon the tester, and defects are found by random checking.

Ad Hoc Testing does have its own benefits:

  • A totally informal approach, it provides an opportunity for discovery, allowing the tester to find missing cases and scenarios that might not be included in the test plan (if a test plan exists).
  • The tester can really immerse him/herself in the role of the end user, performing tests free of any boundaries or preconceived ideas.
  • The approach can be implemented easily, without any documents or planning.

That said, while Ad Hoc Testing is certainly useful, a tester shouldn’t rely on it solely. For a project following scrum methodology, for example, a tester who focuses only on the requirements and performs Ad Hoc testing on the rest of the project’s modules will likely ignore some important areas and miss testing other very important scenarios.
Even when a tester attempts to cover all the scenarios and areas with Ad Hoc Testing, a number of them will likely still be missed. There is always a risk that the tester performs the same or similar tests multiple times while other important but broken functionality is never tested at all, because Ad Hoc Testing does not require all the major risk areas to be covered.

Performing Testing on the Basis of Test Plan

Test cases serve as a guide for the testers. The testing steps, areas, and scenarios are defined, and the tester is supposed to follow the outlined approach to perform testing. If the test plan is efficient, it covers most of the major functionality and scenarios, and there is a low risk of missing critical bugs.
On the other hand, a test plan can limit the tester’s boundaries: there is less opportunity to find bugs that exist outside the defined scenarios, or time constraints may limit the tester’s ability to execute the complete test suite.
So, while Ad Hoc Testing is not sufficient on its own, combining the Ad Hoc approach with a solid test plan strengthens the results. By performing tests per the test plan while also devoting resources to Ad Hoc testing, a test team gains better coverage and lowers the risk of missing critical bugs. Defects found through Ad Hoc testing can also be included in future test plans, so that those defect-prone areas and scenarios are tested in later releases.

Additionally, when time constraints limit the test team’s ability to execute the complete test suite, the major functionality can still be defined and documented. The tester can then use these guidelines while testing to ensure that the major areas and functionalities have been covered, and afterwards Ad Hoc testing can continue on these and other areas.

ThinkSys Announces Cal Hacks Sponsorship, First Major Collegiate Hackathon In The San Francisco Bay Area

Sunnyvale, CA, September 26, 2014

ThinkSys Inc, a global technology company focused on software development, e-commerce, QA, and QA automation services, is proud to announce its sponsorship of Cal Hacks, the first major collegiate hackathon in the San Francisco Bay Area.

“ThinkSys is a longtime advocate of the hackathon concept,” says Leslie Sarandah, Vice President of Sales and Marketing. “Our executives have championed hackathons as a way to encourage motivated teams of developers to break away from their day-to-day responsibilities and work in teams on projects of their own design. This type of activity generates some amazing customer-centric innovations in a very short period of time.”

Alexander Kern, Director of Cal Hacks, states, “The hackathon attracts natural problem solvers. We expect this event to bring together some of the brightest students in their fields to direct their energies toward complex problems, with their solutions ultimately judged by leaders in the technology industry. We are really excited to see the results.”

Cal Hacks will take place October 3 – 5 at U. C. Berkeley’s Cal Memorial Stadium. The event will bring together hundreds of undergraduate innovators, coders, and hackers from around the world to create incredible software and hardware projects. This collaborative experience offers invaluable connections, mentorship and teambuilding that will benefit participants today and in the future. The event will last 36 hours and is free to accepted participants.

About ThinkSys Inc
ThinkSys, a global technology products & services company, helps customers improve and grow their business and e-commerce initiatives across the Web and mobile channels. Employing over 120 technology specialists, ThinkSys develops, tests and implements high-quality, cost-effective solutions running in the cloud or on premise. As a leader in web and mobile manual and test automation and monitoring solutions, using its Krypton framework or other Industry tools, ThinkSys enables developers, QA Professionals and management to help reduce time to market. ThinkSys is privately held and is headquartered in Sunnyvale, CA. For more information visit ThinkSys.com

About Cal Hacks
Cal Hacks is the first major collegiate hackathon to take place in the San Francisco Bay Area. Additional sponsors of the event include Microsoft, Google, Dropbox and Facebook. For more information and a complete list of sponsors, go to calhacks.io