ThinkSys Announces Its Platinum Sponsorship Of STARWEST Techwell Event, California

8/9/2015
ThinkSys, a boutique company delivering excellent, cost-effective, and efficient IT solutions and testing services, announced that it is a platinum sponsor of the STARWEST TechWell event, taking place in Anaheim, California, in September. ThinkSys plans to launch Krypton, an innovative regression automation testing framework, at the conference.

Over the years, ThinkSys has helped several enterprises and ISVs across the world build quality software while reducing the cost of quality and improving time to market. ThinkSys has drawn on that experience in building Krypton. Krypton, a low-cost automation solution, is ideal for testing websites, web-based applications, mobile websites, and native mobile apps.

STARWEST is the premier event for software testers and quality assurance professionals. With keynote sessions by thought leaders in quality assurance and software testing, tutorials, conference sessions covering crucial aspects of testing, training classes, and a Test Lab, this is a must-attend event for every quality assurance professional.

Rajiv Jain, the CEO of ThinkSys, will be representing ThinkSys at the conference. Speaking on the occasion, Rajiv said, “Every software development effort involves frequent testing, which is why companies are increasingly turning to test automation. Using the Krypton framework, which we plan to launch at the conference, companies can make automation testing easy, reliable, and fast. It will also allow managers to better leverage existing QA skills in a more productive way.”

Rajiv Jain will be speaking at the conference on ‘Why Do QA Test Automation Projects Fail?’ on Wednesday, September 30, 2015, at 3:00 PM. This interactive session will shed light on the practical aspects organizations need to take care of while implementing their test automation strategy.

Meet the ThinkSys team at the Expo at booth number 35.

About ThinkSys Inc
ThinkSys, a global technology products & services company, helps customers improve and grow their business and e-commerce initiatives across the web and mobile channels. ThinkSys develops, tests, and implements high-quality, cost-effective solutions running in the cloud or on-premises. As a leader in web and mobile manual testing, test automation, performance, and monitoring solutions, using its Krypton framework or other industry tools, ThinkSys enables developers, QA professionals, and management to reduce time to market. ThinkSys is privately held and is headquartered in Sunnyvale, CA. For more information visit http://www.thinksys.com.

Characteristics of an Ace Test Automation Suite

“In some situations, the most important objective of testing is to find as many important bugs as possible. In other situations, finding bugs is not important at all. In yet other situations, bug-finding is only one of a number of important objectives. The wise test professional knows which situation she is in.” – Rex Black

There is no longer any need to make the case for test automation – the obvious value proposition has ensured that software development projects in general, and product development in particular, now always include an allowance for test automation. The real question is: what can be done to improve the chances of success of your own test automation efforts? What characteristics should a comprehensive test automation suite possess?
Architecture: Remember that the automation suite is, to all intents and purposes, a software product, and hence its architecture is of prime importance. The best architecture emphasizes methodology, manageability, and maintainability of the suite. The test methodology – essentially how the testing will be carried out – is more important than the technology of the day that goes into creating the suite. The product being tested will keep evolving, especially in these days of continuous delivery, so the suite has to be easy to update and scale.
Process: The success of a test automation strategy is highly dependent on how well the process is organized, including management of the test process and management of the tests themselves. The first implies tight integration with the business: there is a need to be conscious of the issues the software product or project is looking to address. Efficient and effective involvement of business stakeholders, users, and auditors becomes key.
Trackability: Among the top reasons to consider automation is making repetitive tests faster and easier. In these cases, chances are you will be running the same tests against many devices or under many environments. A great test automation suite will ensure you are always able to keep track of exactly how the automation is faring – essentially giving you full visibility into what the automated configuration and compatibility testing is achieving at all times.
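As an illustration of the kind of visibility this implies – a minimal sketch only, assuming the suite is built on Java and TestNG (neither is mandated by anything above) – a listener like the one below can record how each check fared against the environment it ran on. The “env” property name and the class name are placeholders.

import org.testng.ITestContext;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class EnvironmentTrackingListener implements ITestListener {

    @Override
    public void onTestSuccess(ITestResult result) { log("PASS", result); }

    @Override
    public void onTestFailure(ITestResult result) { log("FAIL", result); }

    @Override
    public void onTestSkipped(ITestResult result) { log("SKIP", result); }

    private void log(String outcome, ITestResult result) {
        // "env" is assumed to be supplied by the runner, e.g. -Denv=chrome-win10
        String environment = System.getProperty("env", "unknown");
        System.out.println(outcome + " | " + environment + " | " + result.getName());
    }

    // Remaining callbacks left empty for brevity
    @Override public void onTestStart(ITestResult result) { }
    @Override public void onTestFailedButWithinSuccessPercentage(ITestResult result) { }
    @Override public void onStart(ITestContext context) { }
    @Override public void onFinish(ITestContext context) { }
}

Attached to the suite, a listener of this kind gives a running record of how configuration and compatibility runs are faring across environments.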
Capability: In a nutshell, the aim of test automation is to achieve more test coverage in a shorter time while reducing the chances of human error. That being said, not all tests are the same – and since you can never really achieve 100% test automation, which tests should a great test automation suite prioritize?

  • Traditional wisdom has been that a great test automation suite should help automate routine tasks like smoke tests and regression tests – the rationale is sound.
  • Our view is that a test automation suite should also seek to extend the possibilities of normal testing – in many ways this suggests that an outstanding test automation suite will be one that takes on more than is possible with manual testing – a suite that helps execute those test cases that are difficult to execute manually.
  • We have already mentioned cross-platform test cases like different OS’s, browsers and platforms. These are great tests to try to automate given that they need to be performed repeatedly – a fit case for automation.
  • We have spoken of how the road to success lies in ensuring the test automation is integrated into the business logic. This suggests that a test automation suite that effectively automates the testing of complex business logic would be another fit case for automation.

Boris Beizer said “More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.” That’s an onerous load for testing in general and test automation in particular to bear but the best test automation suites out there have that capability – happy testing!

Test Automation – At Home in an Agile Environment

“A ‘passing’ test doesn’t mean ‘no problem.’ It means no problem ‘observed’. This time. With these inputs. So far. On my machine.” – Michael Bolton
In 2013, as many as 88% of the organizations responding to a VersionOne “State of Agile Development” survey confirmed that they were practicing agile development, and that number has only gone up since. Agile, obviously, is defined by an approach of short sprints, iterative development, and short release cycles. Given the apparent time pressure on the test cycles, testing expert Bolton’s tune would ring true to many in software product development today (sorry, couldn’t resist the lame pun). The objective is faster testing and more code coverage so that less “technical debt” is passed on. So what’s the way out? Many have considered test automation to be the answer.

(Image courtesy of maisasolutions.com)

The faster cycles in Agile development mean the time available to test is shorter – an excellent case for automating the testing. Each successive release also means more features added and hence more code to be tested – more test cases to be covered in the same or less time. This would be practically impossible to do without automation. The iterative development approach also means a need for more robust regression tests to check that new releases don’t break things already fixed in previous versions – again a strong case for a well-put-together automation suite. So, that seems to be quite categorical – agile product development absolutely needs test automation.

It seems important, thus, to start at the beginning and make test automation a consideration when the product is being designed – essentially, design tests when the product and its features are being designed. This would allow the test automation strategy to be based on what the product is expected to do rather than on specific iterations of the code. It would also allow designing automated tests that exercise the layers below the GUI, which tend to change less at each iteration than the GUI itself.
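To make that idea concrete, a below-the-GUI check might look like the sketch below. This is a minimal illustration only, assuming the product exposes some HTTP endpoint; the URL is purely hypothetical, and only plain JDK classes are used.

import java.net.HttpURLConnection;
import java.net.URL;

public class BelowTheGuiCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical service endpoint; a below-the-GUI test asserts on the service
        // contract instead of on page layout, so it survives UI churn between sprints
        URL url = new URL("https://example.com/api/health");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();
        if (status != 200) {
            throw new AssertionError("Expected HTTP 200 from the health endpoint, got " + status);
        }
        System.out.println("Service layer is up and answering");
    }
}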

Assuming test automation has had the benefit of being part of the product planning, an Agile (read: iterative) approach can also work in building a complete regression suite. Essentially, this would mean building automation only for those features carried over into the current version from the previous version. The focus would be on those features that have become stable. Over the course of a few sprints, as the features add up, the automation of their unit tests would too, leading to a regression suite that offers more or less complete coverage. A practical variant of this method is to divide the creation of the suite into parts and approach each of them separately, e.g. the critical suite which must pass every single iteration, the “must-have” suite that must pass all major release iterations, and the “nice to have” suite that can be run ad hoc (a sketch of this tiering follows below).
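One way to realize that tiering – sketched here under the assumption that the suite is written in Java with TestNG; the test names and groups are illustrative only – is to tag each check with the tier it belongs to:

import org.testng.annotations.Test;

public class CheckoutRegressionTests {

    // "critical" group: must pass on every single iteration
    @Test(groups = { "critical" })
    public void cartTotalIsCalculatedCorrectly() { /* assertions go here */ }

    // "must-have" group: must pass on all major release iterations
    @Test(groups = { "must-have" })
    public void discountCodeIsApplied() { /* assertions go here */ }

    // "nice-to-have" group: run ad hoc when time permits
    @Test(groups = { "nice-to-have" })
    public void giftWrapOptionIsRemembered() { /* assertions go here */ }
}

The suite definition or the CI job then simply selects which groups to run for a given iteration, for example only the critical group on every commit.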

There are also movements out there that look at this differently. A case in point is the “Test First” approach – in some ways this turns the traditional build-first, test-later approach on its head. This approach proposes to have the tests in place first and use them to validate whether the code that has been created achieves what it is supposed to – different from the approach where the tests are used to determine if anything is not working the way it should. Clearly the planning burden here is high – the test automation team has to be firmly integrated into the product planning process at the very start to make this work. A lot of testing professionals have their eye on this interesting approach to see how it pans out.

The test automation case is not without challenges though – the chief one being: when the releases are coming so thick and fast, which target code base do you base the automation suite on? The other recurring theme is an incomplete strategy – many test automation plans stop at the automation of the unit tests. A more complete automation strategy that addresses unit tests, integration tests, system tests, and of course regression tests would have a much greater chance of delivering the promised benefits. Key to addressing both these challenges is the ability of the product leadership to integrate the test automation team into the early stages of the product design and planning cycle.

In closing, let us accept that test automation has a crucial role to play in Agile development – like everything else in software engineering, though, it needs to be approached in a considered and organized fashion. Wasn’t it Louis Srygley who said, “Without requirements or design, programming is the art of adding bugs to an empty text file”?

Ensuring Success of Automated Software Testing

Have repetitive manual tasks escalated your budget and deadlines? Automated testing is the solution. It is about executing repetitive test cases using software tools.
Clearing the bugs requires a combination of manual and automated testing. As a report by NIST suggested, poor software quality costs the US economy billions of dollars every year. A big chunk of these bug dollars can be recovered by improving the infrastructure for quality assurance.
How to Ensure Success of Automated Software Testing?
As the complexity and scale of software has increased, test automation has become an effective solution in software quality assurance.
Test automation makes sense when there are several repetitive tests, frequent regression testing iterations, a large set of build verification test (BVT) cases, and when manual test execution cannot be relied on for critical functionality.

The success of automation testing depends to a large extent on the selection of testing tools and frameworks. It is for the team of testers to take various factors into account before choosing the relevant automation tools. This one-time exercise is an important one, as it will influence the project in a big way over the long run.

Criteria that need to be considered before selecting any testing tool include the skilled resources available to allocate to automation tasks, budget, testing needs, project environment, and technology. Does the automation tool support all the technologies and objects used in the application? A tool failing to identify the objects used in the application may leave you stuck even on small tests.

The tool version used for test development must be stable. The vendor must provide appropriate customer support along with online help resources and a user manual.

The tool’s learning curve is another important factor. The time needed to learn the tool must be acceptable for your goals. Consider whether the automation tool is required only for a single project or whether you are looking for a common tool for several projects. The tool chosen must support most of the coding languages used on those projects.

Choosing a quality automation tool that supports the maximum number of testing types (unit, functional, regression, etc.) is always a better decision. The tool must also be robust enough to automate complex requirements.

The tool must also facilitate adequate reporting with a graphical interface. Clear and concise reports help you draw conclusions from the test results quickly and effectively.

Burgeoning Demand for Mobile Apps

Statistics show great demand for mobile test automation. As per the estimates of the International Telecommunication Union, there are about 6.8 billion mobile subscriptions – an astonishing figure, equivalent to 96% of the world population. An article recently published in Business Insider states that 22% of the global population owns a smartphone.
Demand for mobile apps is also burgeoning with the increase in the number of mobile phones. However, before launching an app, you need to verify that it works on the devices your market actually uses. With the range of mobile devices now available, it is important to work with a company capable of developing apps with all the needed functions.

This is accomplished either through simulators or by testing directly on device types such as BlackBerrys, iPhones, and Android phones, so that the application’s functions can be tested and monitored. A big advantage of this approach is that it saves time, money, and energy for the originating company. It helps find the errors, design flaws, and bugs which may affect the overall marketability of the application. The testing program creates a spreadsheet or record of the problems, thereby providing valuable information to the engineers and technicians who are trained and paid to analyze the data. This is certainly better than users stumbling on the errors.

The technicians then work to resolve the outstanding issues, making sure that the functions work perfectly well. This requires the expertise of professionals with the knowledge and experience to get it done right the first time, so that they can hand a 100% bug-free application back to the clients.

As a company that needs to work with this type of vendor, it is essential to choose someone who has a solid reputation, is trustworthy, and offers competitive pricing, so that you pay for quality and accuracy. Make the call today and get started working with a company that has the same high standards in automated mobile testing that you do!

Budget Allocation to Software Testing

Budget allocations to software testing are generally not trivial, but they remain a minority component of overall budgets. The PlanIT Testing Index 2011 reported a 19% allocation to testing, and the figure has been exceptionally stable. Quality assurance was given the highest priority by the banking and finance sector, which allocated 39% of project budgets to it. As for the overall allocation of budget, the highest proportion is earmarked for development activity.

The trend is to prefer automated testing in place of conventional manual testing, which is tedious and time consuming. Testers will often start a test run in the evening and return in the morning to analyze the results. For automation to succeed, selecting the right tool is imperative.

 

Automated testing keeps a check on the quality of the product right from the beginning, reducing the time spent on repetitive tasks. Once the automated frameworks are designed, the tests written will continue to run for the lifetime of the project with little maintenance.

When it comes to software testing, you should only ever hire the best, because this is a vital step that cannot be ignored and should not be bypassed if you want to market a viable application or program to the consumer. The testing process is where the bugs, design flaws, and code errors are found and corrected so that the product runs according to design. If you, as a business owner, try to save money by going cheap on this, you will end up paying more in the long run.

 

Check out the website and speak with a customer representative today to find out how they can help your business achieve state-of-the-art programs and applications using their software testing tools. If you need several projects completed at the same time, ask about dedicated resources and their schedule to ensure that they can handle what you have to offer.

Emerging Trends in Software Testing To Look For

Competitive pressure and constant evolution keep improving the standards of quality assurance. Here are a few emerging trends in software testing to look for.

  1. Test Automation:
    Test automation is a big factor in improving the efficiency of software testing. It may not completely replace the agility and creativity of manual testing, but it is certainly a quick way to cover the bases throughout the various phases of development. It also brings costs down substantially.
  2. Increasing use of mobile and cloud
    As the 2013-2014 World Quality Report suggests, the percentage of organizations using mobile testing jumped to 55 percent in 2013 from 31 percent in 2012. More mobile applications are relying on the cloud, making it even more important to test cloud-based systems.
  3. Security Testing:
    Security came in a close second to efficiency, garnering 56 percent of the preferences. With the increased connectivity of information systems and devices, opportunities for hacking have gone up as well. Security will continue to be a top focus.
  4. Context-Driven Testing:
    Testers need to put various approaches to use throughout product development. They will need to hone skills in context-driven testing, whether through formal training or on-the-job observation. The most in-demand testers are those with an array of skills appropriate for many contexts, able to interpret which skills are required in a given situation.
  5. Centralized Testing:
    Organizations are moving towards transferring testing from development teams to a centralized testing team. The Test Center of Excellence (TCOE) model identifies the tools and best practices to improve testing efficacy. More and more businesses are looking for IT partners with fully operational TCOEs.
  6. Testing in Agile Development Environment:
    The best software testing companies are working to build a sound testing approach that fits the agile development methodology and to use the right testing tools. Companies today need to focus on getting into the delivery phase quickly. A better testing model in an agile environment provides a constant flow of updates, facilitating software development with respect to the needed features.

Automating Web Apps That Use AJAX with Selenium WebDriverWait

By – Divas Pandey

When I first started working on web automation, the biggest challenge I faced was synchronizing the speed of automation script execution with the browser’s response to the action performed. The browser’s response can be fast or slow – normally it is slow, for a number of reasons such as slow internet speed or slow performance of the browser or testing machine. On analyzing automation results we see that the maximum number of test cases FAIL because an element is not found during step execution. The solution to this problem of synchronization between automation speed and object presence is proper wait management. Selenium WebDriver provides various types of wait.

A simple synchronization scenario: suppose a button exists on a page, and on clicking this button a new object should appear, but the new object appears very late due to slow internet speed and as a result our test case fails. Here a ‘wait’ helps the user avoid such issues while redirecting to different web pages, refreshing the entire web page and re-loading the new web elements.

“Dependent on several factors, including the OS/Browser combination, WebDriver may or may not wait for the page to load. In some circumstances, WebDriver may return control before the page has finished, or even started, loading. To ensure robustness, you need to wait for the element(s) to exist in the page using Explicit and Implicit Waits.”

We have 3 main types of wait.

  • Implicit Wait
  • Explicit Wait
  • Fluent Wait

 1) Implicit Wait:

The implicit wait remains alive for the lifetime of the WebDriver object. In other words, it is set for the entire duration of the WebDriver object. The implicit wait implementation first checks whether the element is available in the DOM (Document Object Model); if not, it waits for the element to appear on the web page for a specified time.

Once the specified time is over, it searches for the element one last time before throwing an exception.

Normally the implicit wait polls the DOM, and each time it does not find an element it waits for that element for a certain time; because of this, test execution becomes slow, since the implicit wait keeps the script waiting. For this reason, people who are very experienced in writing Selenium WebDriver code advise against using it in scripts, and for a good script the implicit wait should be avoided.

Example:

Here are the steps to apply an implicit wait.

Import the java.util.concurrent.TimeUnit package.

Create a WebDriver object:

WebDriver driver = new FirefoxDriver();

Define the implicit wait timeout:

driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

class ImplicitWait_test
{
    public static void main(String[] args)
    {
        WebDriver driver = new FirefoxDriver();
        String baseUrl = "http://www.wikipedia.org/";
        // Every findElement call will now wait up to 30 seconds for the element to appear in the DOM
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        // Navigate to the URL under test
        driver.get(baseUrl);
    }
}

 2) Explicit Wait:

The difference between the explicit and implicit wait is that the implicit wait is applied to all elements of the test case by default, while the explicit wait is applied to the targeted element only.

Suppose there is a scenario where a particular element takes more than a minute to load. In that case we would definitely not like to set a huge implicit wait time, because the browser would then wait that long for every element. To avoid such a situation, introduce a separate wait on the required element only. This way the browser’s implicit wait time stays short for every element, while the wait is long only for the specific element.

There are two classes, WebDriverWait and ExpectedConditions, for this purpose.

Some conditions of the ExpectedConditions class are mentioned below:

  • alertIsPresent: Is an alert present?
  • elementSelectionStateToBe: Is the element’s selection state as expected?
  • elementToBeClickable: Is the element clickable?
  • elementToBeSelected: Is the element selected?
  • frameToBeAvailableAndSwitchToIt: Is the frame available, and switch to it?
  • invisibilityOfElementLocated: Is the element invisible?
  • presenceOfAllElementsLocatedBy: Are all elements matching the locator present?
  • refreshed: Wait for a page refresh.
  • textToBePresentInElement: Is the given text present in a particular element?
  • textToBePresentInElementValue: Is the given text present in the element’s value attribute?
  • visibilityOf: Is the element visible?
  • titleContains: Does the title contain the given text?

Example:

  • First, create an instance of WebDriverWait.

WebDriverWait wait = new WebDriverWait(driver, timeOutInSeconds);

timeOutInSeconds: the time value given as input here – how many seconds the driver has to wait.

WebDriverWait wait = new WebDriverWait(driver, 30);

  • Use the until method with the WebDriverWait object.

wait.until(ExpectedConditions.someCondition(By.xpath("xxxxxxxxxx"), "XXXXXXXXXXXX"));

(Here someCondition is a placeholder for whichever condition from the list above fits your check.)

Below is sample code that waits for an element to become clickable.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class ExplicitWait_test
{
    public static void main(String[] args)
    {
        WebDriver driver = new FirefoxDriver();
        driver.get("URL to launch...");
        // Wait up to 10 seconds, but only for this specific element
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement element = wait.until(ExpectedConditions.elementToBeClickable(By.id("someid")));
    }
}

3) FluentWait:

FluentWait defines the maximum amount of time to wait for a specific condition, along with the frequency with which to check the condition.

We can implement the fluent wait in two ways: the first uses a predicate and the other uses a function. The difference between the two is that a function can return any object or a Boolean value, whereas a predicate returns only a Boolean value. We can use either of them as per our requirement.

To implement FluentWait we need to add the Guava JAR to our project. Below I explain examples of fluent wait with a function and with a predicate.

Fluent Wait with Function:

A scenario for fluent wait with a function: a button exists on a web page, and when the user clicks the button an alert modal appears on the page. Here I am trying to verify that, when I click the button, the alert is present on the page. I have set the maximum wait time to 30 seconds and the polling time (the frequency with which the condition is checked) to 3 seconds. When I launch the page, the script waits up to 30 seconds for the expected alert modal, checking for it every 3 seconds and printing ‘Alert not present’ until it finds the alert on the page. If the alert does not appear within 30 seconds it throws a timeout exception; if the user clicks the button within those 30 seconds, the script accepts the alert modal.

In this code I have used a function to implement the FluentWait. I am returning a Boolean value from the function, but we could also return an element here.

// Requires org.openqa.selenium.Alert, org.openqa.selenium.NoAlertPresentException,
// org.openqa.selenium.support.ui.FluentWait, org.openqa.selenium.support.ui.Wait,
// java.util.concurrent.TimeUnit and com.google.common.base.Function (from the Guava JAR).
// "driver" is the WebDriver instance created earlier in the test.

Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
        .withTimeout(30, TimeUnit.SECONDS)
        .pollingEvery(3, TimeUnit.SECONDS)
        .ignoring(NoAlertPresentException.class);

wait.until(new Function<WebDriver, Boolean>()
{
    @Override
    public Boolean apply(WebDriver d)
    {
        try
        {
            // Switching to the alert throws NoAlertPresentException while it is not there yet
            Alert alert = d.switchTo().alert();
            alert.accept();
            return true;
        }
        catch (NoAlertPresentException e)
        {
            System.out.println("Alert not present");
            return false;
        }
    }
});

 

Fluent wait with predicate:

A scenario for fluent wait with a predicate: a button exists on a web page, and an HTML popup with the ID ‘popup_container’ appears when the user clicks the button. Here I am trying to verify that when I click the button the popup appears on the page. The maximum waiting time and polling time are the same as in the example above.

When I launch the page, the script waits up to 30 seconds for the expected popup, checking for the popup window every 3 seconds and printing ‘Element is not present…’ after each check until the popup is present. If the popup does not appear within 30 seconds it throws a timeout exception; if the user clicks the button within those 30 seconds it prints ‘got it!!!!! element is present on the page….’.

// Requires com.google.common.base.Predicate (Guava), org.openqa.selenium.NoSuchElementException,
// org.openqa.selenium.WebElement, org.openqa.selenium.support.ui.FluentWait and java.util.concurrent.TimeUnit.
// "driver" is the WebDriver instance created earlier in the test.

FluentWait<WebDriver> fw = new FluentWait<WebDriver>(driver)
        .withTimeout(30, TimeUnit.SECONDS)
        .pollingEvery(3, TimeUnit.SECONDS)
        .ignoring(NoSuchElementException.class);

fw.until(new Predicate<WebDriver>()
{
    @Override
    public boolean apply(WebDriver arg0)
    {
        // getElement is a helper from our framework that returns the element, or null if it is absent
        WebElement e = getElement(arg0, "id", "popup_container");
        if (e != null)
        {
            System.out.println("got it!!!!! element is present on the page....");
            return true;
        }
        else
        {
            System.out.println("Element is not present...");
            return false;
        }
    }
});

Keep an eye on our blog for more on automating web apps.

Understanding the scope of Smoke testing and Sanity testing

By – Manu Kanwar
The terms sanity testing and smoke testing are used interchangeably in many instances, despite the fact that they do not mean the same. There may be some similarities between the two testing methods, but there are also differences that set them apart from each other.

Smoke Testing:

Smoke testing usually means testing that a program launches and its interfaces are available.  If the smoke test fails, you can’t do the sanity test. When a program has many external dependencies, smoke testing may find problems with them.

In smoke testing, just the basic functionalities are tested, without going into detailed functional testing. Thus, it is shallow and wide. With smoke testing, requirement specification documents are rarely taken into consideration. The objective of smoke testing is to check the application’s stability before starting thorough testing. A minimal sketch of what such a check might look like follows.
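As a rough illustration only, a single automated smoke check in Selenium WebDriver might look like this; the URL and the title check are hypothetical stand-ins for whatever “is the build even testable?” question applies to your application.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Shallow and wide: just prove the application comes up at all
            driver.get("http://your-app-under-test/");
            if (!driver.getTitle().toLowerCase().contains("login")) {
                throw new AssertionError("Login page did not load - build is not stable enough to test");
            }
            System.out.println("Smoke check passed: application launched and login page is reachable");
        } finally {
            driver.quit();
        }
    }
}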

Sanity Testing:

Sanity testing is ordinarily the next level after smoke testing. In sanity testing you test that the application is generally working, without going into great detail.
Sanity testing is mostly done after a product has already seen a few releases or versions. In some cases, a few basic test cases in a specific area are combined into a single sanity test case that tests the working of the functionality in that specific area of the product.
Sanity testing will be deep and narrow, and the tester will need to refer to specific requirements. The objective of sanity testing is to check the application’s rationality before starting thorough testing.

Are smoke and sanity testing different?

In some organizations smoke testing is also known as Build Verification Test (BVT) as this ensures that the new build is not broken before starting the actual testing phase.
When there are some minor issues with software and a new build is obtained after fixing the issues then instead of doing complete regression testing, sanity is performed on that build. You can say that sanity testing is a subset of regression testing.

Important Points:

  1. Both smoke and sanity tests can be executed manually or using an automation tool. When automated tools are used, the tests are often initiated by the same process that generates the build itself (see the sketch after this list).
  2. As per the needs of testing, you may have to execute both sanity and smoke tests on the software build. In such cases you will first execute the smoke tests and then go ahead with sanity testing. In industry, test cases for sanity testing are commonly combined with those for smoke tests to speed up test execution. Hence it is common for the terms to be confused and used interchangeably.
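As a sketch of point 1, a build step could invoke the checks programmatically through TestNG’s Java API. This is an assumption for illustration – the SmokeTest class referenced here is the hypothetical one sketched in the smoke testing section above, and your build server integration may differ.

import org.testng.TestNG;

public class PostBuildVerification {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // Point the runner at whichever classes hold your smoke and sanity checks
        testng.setTestClasses(new Class[] { SmokeTest.class });
        testng.run();
        // A non-zero exit code tells the build server that the new build failed verification
        System.exit(testng.hasFailure() ? 1 : 0);
    }
}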

Visit us at www.thinksys.com or drop us an email at [email protected] to connect with us.

Can Exploratory Testing Be Automated?

By – Michael Bolton

In an earlier blog, Simran wrote about the benefits of Ad Hoc Testing and how important it is. This week we are bringing you Michael’s thoughts on whether or not exploratory testing can be automated. We at ThinkSys believe in making QA automation fundamental to increasing productivity and decreasing development cycles. There are (at least) two ways to interpret and answer that question.

Let’s look first at answering the literal version of the question, by looking at Cem Kaner’s definition of exploratory testing:

Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.

If we take this definition of exploratory testing, we see that it’s not a thing that a person does, so much as a way that a person does it. An exploratory approach emphasizes the individual tester, and his/her freedom and responsibility. The definition identifies design, interpretation, and learning as key elements of an exploratory approach. None of these are things that we associate with machines or automation, except in terms of automation as a medium in the McLuhan sense: an extension (or enablement, or enhancement, or acceleration, or intensification) of human capabilities. The machine to a great degree handles the execution part, but the work in getting the machine to do it is governed by exploratory—not scripted—work.

Which brings us to the second way of looking at the question: can an exploratory approach include automation? The answer there is absolutely Yes.

Some people might have a problem with the idea, because of a parsimonious view of what test automation is, or does. To some, test automation is “getting the machine to perform the test”. I call that checking. I prefer to think of test automation in terms of what we say in the Rapid Software Testing course: test automation is any use of tools to support testing.

If yes then up to what extent? While I do exploration (investigation) on a product, I do whatever comes to my mind by thinking in reverse direction as how this piece of functionality would break? I am not sure if my approach is correct but so far it’s been working for me.

That’s certainly one way of applying the idea. Note that when you think in a reverse direction, you’re not following a script. “Thinking backwards” isn’t an algorithm; it’s a heuristic approach that you apply and that you interact with. Yet there’s more to test automation than breaking. I like your use of “investigation”, which to me suggests that you can use automation in any way to assist learning something about the program.

A while ago, I developed a program to be used in our testing classes. I developed that program test-first, creating some examples of input that it should accept and process, and input that it should reject. That was an exploratory process, in that I designed, executed, and interpreted unit checks, and I learned. It was also an automated process, to the degree that the execution of the checks and the aggregating and reporting of results was handled by the test framework. I used the result of each test, each set of checks, to inform both my design of the next check and the design of the program. So let me state this clearly:

Test-driven development is an exploratory process.

The running of the checks is not an exploratory process; that’s entirely scripted. But the design of the checks, the interpretation of the checks, the learning derived from the checks, the looping back into more design or coding of either program code or test code, or of interactive tests that don’t rely on automation so much: that’s all exploratory stuff.

The program that I wrote is a kind of puzzle that requires class participants to test and reverse-engineer what the program does. That’s an exploratory process; there aren’t scripted approaches to reverse engineering something, because the first unexpected piece of information derails the script. In work-shopping this program with colleagues, one in particular—James Lyndsay—got curious about something that he saw. Curiosity can’t be automated. He decided to generate some test values to refine what he had discovered in earlier exploration. Sapient decisions can’t be automated. He used Excel, which is a powerful test automation tool, when you use it to support testing. He invented a couple of formulas. Invention can’t be automated. The formulas allowed Excel to generate a great big table. The actual generation of the data can be automated. He took that data from Excel, and used the Windows clipboard to throw the data against the input mechanism of the puzzle. Sending the output of one program to the input of another can be automated. The puzzle, as I wrote it, generates a log file automatically. Output logging can be automated. James noticed the logs without me telling him about them. Noticing can’t be automated. Since the program had just put out 256 lines of output, James scanned it with his eyes, looking for patterns in the output. Looking for specific patterns and noticing them can’t be automated unless and until you know what to look for. BUT automation can help to reveal hitherto unnoticed patterns by changing the context of your observation. James decided that the output he was observing was very interesting. Deciding whether something is interesting can’t be automated. James could have filtered the output by grepping for other instance of that pattern. Searching for a pattern, using regular expressions, is something that can be automated. James instead decided that a visual scan was fast enough and valuable enough for the task at hand. Evaluation of cost and value, and making decisions about them, can’t be automated. He discovered the answer to the puzzle that I had expressed in the program… and he identified results that blew my mind—ways in which the program was interpreting data in a way that was entirely correct, but far beyond my model of what I thought the program did.

Learning can’t be automated. Yet there is no way that we would have learned this so quickly without automation. The automation didn’t do the exploration on its own; instead, it super-charged our exploration. There were no automated checks in the testing that we did, so no automation in the record-and-playback sense, no automation in the expected/predicted result sense. Since then, I’ve done much more investigation of that seemingly simple puzzle, in which I’ve fed back what I’ve learned into more testing, using variations on James’ technique to explore the input and output space a lot more. And I’ve discovered that the program is far more complex than I could have imagined.

So: is that automating exploratory testing? I don’t think so. Is that using automation to assist an exploratory process? Absolutely.

Republished with permission from (http://www.developsense.com/blog/2010/09/can-exploratory-testing-be-automated/) , by Michael Bolton.  Republication of this work is not intended as an endorsement of ThinkSys’s services by Michael Bolton or DevelopSense.

Emerging Trends in Software Testing and Quality Assurance

Customer expectations are higher than ever when it comes to software quality, so testing becomes even more important. Quality assurance has steadily evolved through the years. Here are a few emerging trends in testing and quality assurance.

Test automation: Quality test automation can contribute a lot to efficiency. It may not substitute for the creativity brought in by manual testing, but it does help make things quicker and more accurate.

Testing mobile and cloud-based systems: Cloud usage has grown manifold in recent times. As the 2013-2014 World Quality Report suggests, the percentage of enterprises using mobile testing grew from 31 percent in 2012 to 55 percent in 2013, and the graph continues to climb.

More emphasis on security: As a survey for the World Quality Report indicates, efficiency and performance constitute the primary focus for mobile testing, at 59 percent, followed closely by security at 56 percent. With the threat of hacking ubiquitous, security is sure to remain in focus.

Context-driven testing: Software testers can no longer follow the same standard procedure on all projects. Rather, they have to follow a context-driven testing approach. They have to learn an array of skills and the ability to interpret which skill to use in a given situation.

Moving to the testing center of excellence model: The model deals with tool identification and the best practices for strengthening the efficacy of tests. The testing process is transferred from development teams to a centralized testing team.

ThinkSys, one of the leading US-based software testing companies, has kept pace with the changing scenario in software testing. We have separate departments for software development and QA testing, which results in increased efficiency and cost-efficacy. Our experts keep a close watch on the changing scenario, making sure that we keep moving with the stream. Moreover, we adapt to client demands by streamlining the QA structure to improve cost optimization as well as accuracy.

Is Ad Hoc testing reliable?

By – Simran Puri

What is Ad Hoc Testing?

Performing random testing without any plan is known as Ad Hoc Testing.  It is also referred to as Random Testing or Monkey Testing. This type of testing doesn’t follow any designed pattern or plan for the activity. The testing steps and the scenarios totally depend upon the tester, and defects are found by random checking.

Ad Hoc Testing does have its own benefits:

  • A totally informal approach, it provides an opportunity for discovery, allowing the tester to find missing cases and scenarios that might not be included in the test plan (if a test plan exists).
  • The tester can really immerse him / herself in the role of the end-user, performing tests absent of any boundaries or preconceived ideas.
  • The approach can be implemented easily, without any documents or planning.

That said, while Ad Hoc Testing is certainly useful, a tester shouldn’t rely on it solely. For a project following a scrum methodology, for example, a tester who focuses only on the requirements and performs Ad Hoc testing for the rest of the project’s modules (apart from the requirements) will likely ignore some important areas and miss testing other very important scenarios.
When utilizing an Ad Hoc Testing methodology, a tester may attempt to cover all the scenarios and areas but will likely still end up missing a number of them. There is always a risk that the tester performs the same or similar tests multiple times while other important functionality is broken and ends up not being tested at all. This is because Ad Hoc Testing does not require all the major risk areas to be covered.

Performing Testing on the Basis of Test Plan

Test cases serve as a guide for the testers. The testing steps, areas, and scenarios are defined, and the tester is supposed to follow the outlined approach to perform testing. If the test plan is efficient, it covers most of the major functionality and scenarios and there is a low risk of missing critical bugs.
On the other hand, a test plan can limit the tester’s boundaries. There is less of an opportunity to find bugs that exist outside of the defined scenarios. Or perhaps time constraints limit the tester’s ability to execute the complete test suite.
So, while Ad Hoc Testing is not sufficient on its own, combining the Ad Hoc approach with a solid test plan will strengthen the results. By performing the tests per the test plan while at the same time devoting resources to Ad Hoc testing, a test team will gain better coverage and lower the risk of missing critical bugs. Also, the defects found through Ad Hoc testing can be included in future test plans so that those defect-prone areas and scenarios can be tested in a later release.

Additionally, in the case where time constraints limit the test team’s ability to execute the complete test suite, the major functionality can still be defined and documented. The tester can then use these guidelines while testing to ensure that these major areas and functionalities have been tested. And after this is done, Ad Hoc testing can continue to be performed on these and other areas.

ThinkSys Announces Cal Hacks Sponsorship, First Major Collegiate Hackathon In The San Francisco Bay Area

Sunnyvale, CA, September 26, 2014

ThinkSys Inc, a global technology company focused on software development, e-commerce, QA, and QA automation services, is proud to announce its sponsorship of Cal Hacks, the first major collegiate hackathon in the San Francisco Bay Area.

“ThinkSys is a longtime advocate of the hackathon concept,” says Leslie Sarandah, Vice President of Sales and Marketing. “Our executives have championed hackathons as a way to encourage motivated teams of developers to break away from their day-to-day responsibilities and work in teams on projects of their own design. This type of activity generates some amazing customer-centric innovations in a very short period of time.”

Alexander Kern, Director of Cal Hacks, states, “The hackathon attracts natural problem solvers. We expect this event to bring together some of the brightest students in their fields to direct their energies toward complex problems, with their solutions ultimately being judged by leaders in the technology industry. We are really excited to see the results.”

Cal Hacks will take place October 3 – 5 at U. C. Berkeley’s Cal Memorial Stadium. The event will bring together hundreds of undergraduate innovators, coders, and hackers from around the world to create incredible software and hardware projects. This collaborative experience offers invaluable connections, mentorship and teambuilding that will benefit participants today and in the future. The event will last 36 hours and is free to accepted participants.

About ThinkSys Inc
ThinkSys, a global technology products & services company, helps customers improve and grow their business and e-commerce initiatives across the web and mobile channels. Employing over 120 technology specialists, ThinkSys develops, tests, and implements high-quality, cost-effective solutions running in the cloud or on-premises. As a leader in web and mobile manual testing, test automation, and monitoring solutions, using its Krypton framework or other industry tools, ThinkSys enables developers, QA professionals, and management to reduce time to market. ThinkSys is privately held and is headquartered in Sunnyvale, CA. For more information visit ThinkSys.com

About Cal Hacks
Cal Hacks is the first major collegiate hackathon to take place in the San Francisco Bay Area. Additional sponsors of the event include Microsoft, Google, Dropbox and Facebook. For more information and a complete list of sponsors, go to calhacks.io

4th Annual Selenium Conference 2014

By – Rajnikant Jha

The 4th annual Selenium Conference was held for the first time in India (Bangalore) on September 6, 2014. With 400 attendees from across the globe, around 20 speakers, and the Selenium Committee members, this event delivered tremendous expertise and information. I would highly recommend it to anyone who is working with Selenium or related technologies.

Selenium is the most popular software testing framework for web applications today. The increasing number of Selenium users shows its popularity, and industry job trends show a rise in Selenium automation jobs over the last 3-4 years. A graph of job trends, compared to QTP as a percentage of related jobs, illustrates this rise.

 

Starting with the welcome address given by Simon Stewart, the creator of the WebDriver open source web application testing tool and a core Selenium 2 developer, this conference was rich in information, valuable tips, and smart people with many years of experience in Selenium. The event’s Lightning Talks track gave attendees an opportunity to discuss issues and questions with each other as well as with the Selenium Committee members. Some of the key items worth noting are as follows:

  • Selenium 3.0, which was originally slated for release in December 2013, will be released in a couple of months.
  • With the Selenium 3.0 launch, Selenium RC has been officially deprecated, so the companies using RC should switch to WebDriver.
  • Selenium 4.0 is also on schedule and will probably be released by year-end. It will standardize on the W3C specification too.
  • Selenium IDE is being superseded by Selenium Builder.
  • Selenium Grid will have a video recording functionality enhancement.

My questions, which mainly related to the stability of WebDriver, will be addressed in the next releases of Selenium.

I also received a good response to some of the workarounds that we are doing at ThinkSys to improve script stability and to handle some browser quirks (a sketch of two of them follows the list). For example:

  • Moving the mouse to the origin before the test script starts
  • IE browser settings to run automation
  • Using a Firefox profile with Selenium
  • Handling Chrome crashes
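These are not the exact scripts we presented; as a minimal sketch in Java, two of these workarounds (parking the mouse at the origin, and running Firefox with a dedicated profile) might look like the following, assuming Selenium 2.x and a hypothetical application URL.

import java.awt.Robot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

public class StabilityWorkarounds {
    public static void main(String[] args) throws Exception {
        // Park the OS mouse pointer at the origin so stray hover states cannot disturb the script
        new Robot().mouseMove(0, 0);

        // Launch Firefox with a dedicated profile so cache, cookies and certificate handling
        // are controlled by the test rather than by whatever is on the machine
        FirefoxProfile profile = new FirefoxProfile();
        profile.setAcceptUntrustedCertificates(true);
        WebDriver driver = new FirefoxDriver(profile);

        driver.get("http://your-app-under-test/");
        driver.quit();
    }
}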

There was so much good information at the conference that will help the community design, maintain, and execute test scripts using Selenium. Some of my favorite talks and presentations included:

1) Perils of Page Object Pattern – by Anand Bagmar

The Page Object Pattern models pages within the test code, which reduces duplication and provides better maintainability, since a change needs to be made in only one place. Most WebDriver scripts and frameworks use the Page Object Pattern. The talk explained the Page Object Pattern through the example of an Amazon application and code samples. From the discussion and code samples we understood the following limitations of the Page Object Pattern:

  • Test intent gets polluted
  • Duplication of implementation
  • Maintenance challenges
  • Scaling challenges

Does that mean we should not use the Page Object Pattern? We should still use it for the benefits it brings, but it should be created for the business layer of the application. The ideal test automation pyramid has two different types of test automation – one for technology tests, which include unit tests and integration tests, and another for business tests, which include UI, functional, and regression tests. This pyramid helps to express the test intent in business terminology. The test intent is the most important thing in the Page Object Pattern.

A business-layer Page Object Pattern, with the test intent expressed at the business layer, helps in designing the correct pattern and has the following advantages (a minimal sketch follows the list):

  • Effective automation scripts for business requirements
  • Abstraction layer allows separation of concerns
  • Maintenance and scaling become easier
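To show the shape of the pattern being discussed – a generic illustration only, not the Amazon example from the talk; the locators and method names are hypothetical – a minimal page object might look like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// The page's locators and actions live in one class, so a UI change is absorbed
// by a single edit here instead of rippling through every test that touches the page
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Expressed as a business action, which keeps the test intent readable
    public void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}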

2) Scaling and Managing Selenium Grid – by Dima Kovalenko

There are three main topics:

  • Stability
  • Speed
  • Coverage

The presentation talked about all three topics and covered the main concerns that Selenium users face when using Selenium Grid. The presentation offered a number of suggestions that can be used to stabilize execution using Hub and nodes. The key points regarding stability were:

  • Move as much to Linux as possible
  • Run one test case at a time
  • Use better mechanisms like cron jobs for OS and node configs
  • Use WebDriver instead of RC
  • Create scheduled tasks to periodically restart the Grid nodes and machines
  • Use a batch file for IE that cleans up cookies and cache and then launches Internet Explorer

For speed optimization, use of smaller nodes with single browsers is recommended. For cost effectiveness, the use of low-end machines should be considered.

For test coverage purposes, we may have to use browsers like IE7 and IE8, but these browsers take more maintenance time. Browsers like IE9 and beyond are more stable and recommended. If we have to use Safari, it is better to use one Safari browser per machine. There are some other practices that we may consider for stability of execution on the nodes:

  • Automatically set the IE protected security zones on each reboot
  • Kill web browsers after test
  • Automatically update drivers and jars

Overall, I found the conference educational and motivating. It was great to be surrounded by other technical people in my industry all sharing their knowledge about this technology. I hope to be there at the next conference and encourage others in the Selenium space to join me!

Creating Effective Bugs

By – Shraddha Pande
A skilled QA tester knows that perhaps the most important part of the role is the ability to create effective bugs. A bug is not useful to the testing process if it is not reproducible and properly documented. Developers rely on clear and understandable bug reports to pinpoint what needs to be fixed. Thus, it is critical that these reports and the identified bugs capture all of the necessary data and criteria.

An effective bug must have these qualities:

    • Easily Reproducible:

The basic feature of a bug report is that the bug must be easily reproducible. For this, the report should include the following:

  1. Title: The bug title should be a one-line accurate description.
  2. Steps: The steps to reproduce the bug must be few, clear and relevant.
  3. Summary: The actual and expected results must be descriptive enough so that the developer has a clear understanding of the problem. The expected results must describe precisely what needs to be fixed.
  4. Additional help: Whenever possible, attach a screenshot or video of the bug to the bug report to give the developers a more complete picture of the bug scenario.
  5. Platforms affected: Check the bug in all possible environments. For example, in website testing, one would run the scenario with different operating systems, browsers and mobile devices (versions and platforms) to reproduce the bug in different environments.
  • Severity and Priority:

The bug found should be labeled with the Severity (Critical, Major, Normal, Minor, Trivial, or Enhancement) of its impact on the application, as well as the Priority (High, Medium, or Low) in which it has to be fixed.

  • Not a Duplicate:

The bug should be checked against the other tracked bugs to avoid duplication.

  • Deferrable or Not Deferrable:

Internal testers should also check the bug to ascertain whether its fix can be deferred to the next build release.
After these steps are completed, the QA engineer checks all of the above points, discusses the bug found with the testing lead and the development team, and then, finally, creates the bug.

Conclusion

While the overall process outlined here is the basis of effective bug reporting, never underestimate the importance of good communication skills in the successful documentation and verbal explanation of the issues. A knowledgeable and respectful dialogue between QA and development leads to greater understanding of the issues and a stronger end product.
Visit us at www.thinksys.com or drop us an email at [email protected] to connect with us.