Budget Allocation to Software Testing

Budget allocation to software testing is generally not trivial, but it remains a minority component of overall budgets. The PlanIT Testing Index 2011 reported a 19% allocation to testing, and the figure has been exceptionally stable. Quality assurance was given the highest priority by the Banking and Finance sector, which allocated 39% of project budget to it. As for the overall budget, the largest proportion is earmarked for development activity.

The trend is to prefer automated testing over conventional manual testing, which is tedious and time consuming. Testers will often start a test run in the evening and return in the morning to analyze the results. For automation to succeed, selecting the right tool is imperative.

 

Automated testing keeps a check on product quality right from the beginning and reduces the time spent on repetitive tasks. Once the automation frameworks are designed, the tests written will continue to run for the lifetime of the project with little maintenance.

When it comes to software testing, you should hire the best, because this is a vital step that cannot be ignored or bypassed if you want to bring a viable application or program to market. The testing process is where bugs, design flaws and code errors are found and corrected so that the product runs as designed. If you, as a business owner, try to save money by going cheap here, you will end up paying more in the long run.

 

Check out the website and speak with a customer representative today to find out how they can help your business deliver state-of-the-art programs and applications using their software testing tools. If you need several projects completed at the same time, ask about dedicated resources and scheduling to make sure they can handle your workload.

Emerging Trends in Software Testing To Look For

Competitive pressure and constant evolution keep improving the standards of quality assurance. Here are a few emerging trends in software testing to look for.

  1. Test Automation:
    Test automation is a big factor in improving testing efficiency. It may not completely replace the agility and creativity of manual testing, but it is certainly a quick way to cover the bases throughout the various phases of development. It also brings the cost of testing down substantially.
  2. Increasing use of mobile and cloud
    As the 2013-2014 World Quality Report suggests, the percentage of organizations using mobile testing jumped to 55 percent in 2013 from 31 percent in 2012. More mobile applications are relying on the cloud, making it even more important to test cloud-based systems.
  3. Security Testing:
    Security came in a close second to efficiency, garnering 56 percent of the preferences. With the increased connectivity of information systems and devices, opportunities for hacking have gone up as well. Security will continue to be a top focus.
  4. Context-Driven Testing:
    Testers need to apply a variety of approaches throughout product development. They will need to hone their context-driven testing skills, whether through formal training or on-the-job observation. The most in-demand testers are those with an array of skills appropriate for many contexts, able to judge which skills a given situation requires.
  5. Centralized Testing:
    Organizations are moving testing from development teams to a centralized testing team. The Test Center of Excellence (TCOE) model identifies the tools and best practices that improve testing efficacy. More and more businesses are looking for IT partners with fully operational TCOEs.
  6. Testing in Agile Development Environment:
    The best software testing companies are working to build a sound testing approach that fits the agile development methodology and to use the right testing tools. Companies today need to focus on getting into the delivery phase quickly. A better testing model in an agile environment provides a constant flow of updates, helping development deliver the needed features.

Automating AJAX-heavy web apps with Selenium WebDriverWait

By – Divas Pandey

When I started working on web automation, the biggest challenge I faced was synchronizing the speed of automation script execution with the browser's response to the action performed. The browser's response can be fast or slow; it is often slow for reasons such as slow internet speed or poor performance of the browser or testing machine. On analyzing automation results, we can see that most test cases fail because an element was not found during step execution. The solution to this synchronization problem between automation speed and object presence is proper wait management, and Selenium WebDriver provides several types of wait.

A simple synchronization scenario: suppose a button exists on a page, and clicking it should make a new element appear, but the element appears very late because of slow internet speed, so our test case fails. Here a wait helps the user handle such issues when redirecting to different web pages, refreshing the entire page and re-loading the new web elements.

“Dependent on several factors, including the OS/Browser combination, WebDriver may or may not wait for the page to load. In some circumstances, WebDriver may return control before the page has finished, or even started, loading. To ensure robustness, you need to wait for the element(s) to exist in the page using Explicit and Implicit Waits.”

There are three main types of wait.

  • Implicit Wait
  • Explicit Wait
  • Fluent Wait

 1) Implicit Wait:

The implicit wait remains in effect for the lifetime of the WebDriver object; in other words, it is set for the entire duration of the driver instance. The implicit wait implementation first checks whether the element is available in the DOM (Document Object Model); if not, it waits up to a specified time for the element to appear on the web page.

Once the specified time is over, it searches for the element one last time before throwing an exception.

Normally an implicit wait polls the DOM, and every time it does not find an element it keeps waiting for it up to the configured time; this slows test execution because the implicit wait keeps the script waiting. For this reason, experienced Selenium WebDriver authors advise against using implicit waits, and a well-written script should avoid them.

Example:

Follow the steps below to apply an implicit wait.

Import the java.util.concurrent.TimeUnit package:

import java.util.concurrent.TimeUnit;

Create a WebDriver object:

WebDriver driver = new FirefoxDriver();

Define the implicit wait timeout:

driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ImplicitWait_test
{
    public static void main(String[] args)
    {
        WebDriver driver = new FirefoxDriver();
        String baseUrl = "http://www.wikipedia.org/";
        // The implicit wait applies to every findElement call made with this driver.
        driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
        driver.get(baseUrl);
    }
}

 2) Explicit Wait:

The difference between the explicit and the implicit wait is that an implicit wait applies by default to every element lookup in the test case, while an explicit wait applies only to the targeted element.

Suppose a particular element takes more than a minute to load. We would not want to set a huge implicit wait, because the driver would then wait that long for every element. To avoid this, introduce a separate wait on the required element only: the implicit wait stays short for every element, while the explicit wait is long for that specific element.

Two classes, WebDriverWait and ExpectedConditions, serve this purpose.

Some conditions from the ExpectedConditions class are listed below:

alertIsPresent() : Is Alert Present?

elementSelectionStateToBe: Is the element selected?

elementToBeClickable: Is the element clickable?

elementToBeSelected: Element is selected

frameToBeAvailableAndSwitchToIt: Is frame available and selected?

invisibilityOfElementLocated: Is the element invisible?

presenceOfAllElementsLocatedBy: Are all elements matching the locator present?

refreshed: Wait for a page refresh.

textToBePresentInElement: Is the text present for a particular element?

textToBePresentInElementValue: Is the text present in the element's value attribute?

visibilityOf: Is the element visible?

titleContains: Does the title contain the given text?

Example:

  • First, create an instance of WebDriverWait.

WebDriverWait wait = new WebDriverWait(driver, timeInSeconds);

timeInSeconds: the maximum number of seconds the driver should wait for the condition. For example:

WebDriverWait wait = new WebDriverWait(driver, 30);

  • Use the until method on the WebDriverWait object with an expected condition, for example:

wait.until(ExpectedConditions.textToBePresentInElementLocated(By.xpath("xxxxxxxxxx"), "expected text"));

Below is code that waits for an element to become clickable.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWait_test
{
    public static void main(String[] args)
    {
        WebDriver driver = new FirefoxDriver();
        driver.get("URL to launch…"); // replace with the application URL
        // Wait up to 10 seconds for the element to become clickable.
        WebDriverWait wait = new WebDriverWait(driver, 10);
        WebElement element = wait.until(ExpectedConditions.elementToBeClickable(By.id("someid")));
    }
}

3) Fluent Wait:

FluentWait defines a maximum amount of time to wait for a specific condition, along with the frequency with which to check that condition.

A fluent wait can be implemented in two ways: with a predicate or with a function. The difference is that a function can return any object (or a Boolean), while a predicate can only return a Boolean. We can use either, as the situation requires.

To implement a fluent wait we need to add the Guava jar to the project. Below are examples of a fluent wait with a function and with a predicate.

Fluent Wait with Function:

A scenario for a fluent wait with a function: a button exists on a web page, and when the user clicks it an alert appears. Here I am verifying whether the alert is present on the page after the button is clicked. The maximum wait time is 30 seconds and the polling time (the frequency with which the condition is checked) is 3 seconds. After the page is launched, the script waits up to 30 seconds for the expected alert, checking every 3 seconds and printing 'Alert not present' until it finds the alert. If the alert has not appeared after 30 seconds, a timeout exception is thrown; if it appears within the 30 seconds, it is accepted.

In this code I have used a Function to implement the fluent wait; it returns a Boolean, though a function could just as well return an element.

// Requires org.openqa.selenium.support.ui.FluentWait and com.google.common.base.Function (Guava).
// 'driver' is the WebDriver instance for the page under test.
Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
        .withTimeout(30, TimeUnit.SECONDS)
        .pollingEvery(3, TimeUnit.SECONDS)
        .ignoring(NoAlertPresentException.class);

wait.until(new Function<WebDriver, Boolean>()
{
    @Override
    public Boolean apply(WebDriver d)
    {
        try
        {
            // The alert is present: accept it and stop waiting.
            d.switchTo().alert().accept();
            return true;
        }
        catch (NoAlertPresentException e)
        {
            System.out.println("Alert not present");
            return false;
        }
    }
});

 

Fluent wait with predicate:

A scenario for a fluent wait with a predicate: a button exists on a web page, and when the user clicks it an HTML popup with the ID 'popup_container' appears. Here I am verifying that the popup appears on the page after the button is clicked. The maximum wait time and polling time are the same as in the example above.

After the page is launched, the script waits up to 30 seconds for the expected popup, checking every 3 seconds and printing 'Element is not present…' on each check until the popup appears. If the popup has not appeared after 30 seconds, a timeout exception is thrown; if it appears within the 30 seconds, the script prints 'got it!!!!! element is present on the page….'.

// Requires com.google.common.base.Predicate (Guava).
FluentWait<WebDriver> fw = new FluentWait<WebDriver>(driver)
        .withTimeout(30, TimeUnit.SECONDS)
        .pollingEvery(3, TimeUnit.SECONDS)
        .ignoring(NoSuchElementException.class);

fw.until(new Predicate<WebDriver>()
{
    @Override
    public boolean apply(WebDriver d)
    {
        // getElement is a user-defined helper that looks up an element by locator type and value.
        WebElement e = getElement(d, "id", "popup_container");
        if (e != null)
        {
            System.out.println("got it!!!!! element is present on the page….");
            return true;
        }
        System.out.println("Element is not present…");
        return false;
    }
});

Keep an eye on our blog section for more on automating web apps.

Understanding the scope of Smoke testing and Sanity testing

By – Manu Kanwar
The terms sanity testing and smoke testing are used interchangeably in many instances, despite the fact that they do not mean the same thing. There may be some similarities between the two testing methods, but there are also differences that set them apart.

Smoke Testing:

Smoke testing usually means testing that a program launches and its interfaces are available.  If the smoke test fails, you can’t do the sanity test. When a program has many external dependencies, smoke testing may find problems with them.

In smoke testing, only the basic functionality is tested, without going into detailed functional testing; it is therefore shallow and wide. Requirement specification documents are rarely taken into consideration. The objective of smoke testing is to check the application's stability before starting thorough testing.
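
As a rough illustration, a smoke test often boils down to a handful of broad checks such as the sketch below (the URL and expected title are placeholders, not taken from a real project):

import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SmokeTest
{
    @Test
    public void applicationLaunches()
    {
        WebDriver driver = new FirefoxDriver();
        try
        {
            // Shallow and wide: just confirm the application comes up and shows the expected title.
            driver.get("http://www.example.com/");
            Assert.assertTrue(driver.getTitle().contains("Example"));
        }
        finally
        {
            driver.quit();
        }
    }
}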

Sanity Testing:

Sanity testing is ordinarily the next level after smoke testing. In sanity testing you test that the application is generally working, without going into great detail.
Sanity testing is mostly done after a product has already seen a few releases or versions. In some cases, a few basic test cases in a specific area are combined into a single sanity test case that tests the functionality of that specific area of the product.
Sanity testing is deep and narrow, and the tester needs to refer to specific requirements. The objective of sanity testing is to check the application's rationality before starting thorough testing.

Are smoke and sanity testing different?

In some organizations smoke testing is also known as Build Verification Test (BVT) as this ensures that the new build is not broken before starting the actual testing phase.
When a new build is produced after fixing some minor issues, sanity testing is performed on that build instead of complete regression testing. You can say that sanity testing is a subset of regression testing.

Important Points:

  1. Both smoke and sanity tests can be executed manually or using an automation tool.  When automated tools are used, the tests are often initiated by the same process that generates the build itself.
  2. Depending on the needs of testing, you may have to execute both sanity and smoke tests on a software build. In such cases, execute the smoke tests first and then proceed with sanity testing. In industry, test cases for sanity testing are commonly combined with those for smoke tests to speed up execution, which is why the terms are so often confused and used interchangeably.

Visit us at www.thinksys.com or drop us an email at [email protected] to connect with us.

Can Exploratory Testing Be Automated ?

By – Michael Bolton

In our earlier blog, Simran wrote about the benefits of Ad Hoc Testing and how important it is. This week we are bringing Michael’s thoughts on whether or not Exploratory Testing can be automated. We at ThinkSys believe in making QA Automation fundamental to increasing productivity and decreasing development cycles. There are (at least) two ways to interpret and answer that question.

Let’s look first at answering the literal version of the question, by looking at Cem Kaner’s definition of exploratory testing:

Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.

If we take this definition of exploratory testing, we see that it’s not a thing that a person does, so much as a way that a person does it. An exploratory approach emphasizes the individual tester, and his/her freedom and responsibility. The definition identifies design, interpretation, and learning as key elements of an exploratory approach. None of these are things that we associate with machines or automation, except in terms of automation as a medium in the McLuhan sense: an extension (or enablement, or enhancement, or acceleration, or intensification) of human capabilities. The machine to a great degree handles the execution part, but the work in getting the machine to do it is governed by exploratory—not scripted—work.

Which brings us to the second way of looking at the question: can an exploratory approach include automation? The answer there is absolutely Yes.

Some people might have a problem with the idea, because of a parsimonious view of what test automation is, or does. To some, test automation is “getting the machine to perform the test”. I call that checking. I prefer to think of test automation in terms of what we say in the Rapid Software Testing course: test automation is any use of tools to support testing.

If yes then up to what extent? While I do exploration (investigation) on a product, I do whatever comes to my mind by thinking in reverse direction as how this piece of functionality would break? I am not sure if my approach is correct but so far it’s been working for me.

That’s certainly one way of applying the idea. Note that when you think in a reverse direction, you’re not following a script. “Thinking backwards” isn’t an algorithm; it’s a heuristic approach that you apply and that you interact with. Yet there’s more to test automation than breaking. I like your use of “investigation”, which to me suggests that you can use automation in any way to assist learning something about the program.

A while ago, I developed a program to be used in our testing classes. I developed that program test-first, creating some examples of input that it should accept and process, and input that it should reject. That was an exploratory process, in that I designed, executed, and interpreted unit checks, and I learned. It was also an automated process, to the degree that the execution of the checks and the aggregating and reporting of results was handled by the test framework. I used the result of each test, each set of checks, to inform both my design of the next check and the design of the program. So let me state this clearly:

Test-driven development is an exploratory process.

The running of the checks is not an exploratory process; that’s entirely scripted. But the design of the checks, the interpretation of the checks, the learning derived from the checks, the looping back into more design or coding of either program code or test code, or of interactive tests that don’t rely on automation so much: that’s all exploratory stuff.

The program that I wrote is a kind of puzzle that requires class participants to test and reverse-engineer what the program does. That’s an exploratory process; there aren’t scripted approaches to reverse engineering something, because the first unexpected piece of information derails the script. In work-shopping this program with colleagues, one in particular—James Lyndsay—got curious about something that he saw. Curiosity can’t be automated. He decided to generate some test values to refine what he had discovered in earlier exploration. Sapient decisions can’t be automated. He used Excel, which is a powerful test automation tool, when you use it to support testing. He invented a couple of formulas. Invention can’t be automated. The formulas allowed Excel to generate a great big table. The actual generation of the data can be automated. He took that data from Excel, and used the Windows clipboard to throw the data against the input mechanism of the puzzle. Sending the output of one program to the input of another can be automated. The puzzle, as I wrote it, generates a log file automatically. Output logging can be automated. James noticed the logs without me telling him about them. Noticing can’t be automated. Since the program had just put out 256 lines of output, James scanned it with his eyes, looking for patterns in the output. Looking for specific patterns and noticing them can’t be automated unless and until you know what to look for. BUT automation can help to reveal hitherto unnoticed patterns by changing the context of your observation. James decided that the output he was observing was very interesting. Deciding whether something is interesting can’t be automated. James could have filtered the output by grepping for other instance of that pattern. Searching for a pattern, using regular expressions, is something that can be automated. James instead decided that a visual scan was fast enough and valuable enough for the task at hand. Evaluation of cost and value, and making decisions about them, can’t be automated. He discovered the answer to the puzzle that I had expressed in the program… and he identified results that blew my mind—ways in which the program was interpreting data in a way that was entirely correct, but far beyond my model of what I thought the program did.

Learning can’t be automated. Yet there is no way that we would have learned this so quickly without automation. The automation didn’t do the exploration on its own; instead, it super-charged our exploration. There were no automated checks in the testing that we did, so no automation in the record-and-playback sense, no automation in the expected/predicted result sense. Since then, I’ve done much more investigation of that seemingly simple puzzle, in which I’ve fed back what I’ve learned into more testing, using variations on James’ technique to explore the input and output space a lot more. And I’ve discovered that the program is far more complex than I could have imagined.

So: is that automating exploratory testing? I don’t think so. Is that using automation to assist an exploratory process? Absolutely.

Republished with permission from (http://www.developsense.com/blog/2010/09/can-exploratory-testing-be-automated/) , by Michael Bolton.  Republication of this work is not intended as an endorsement of ThinkSys’s services by Michael Bolton or DevelopSense.

Emerging Trends in Software Testing and Quality Assurance

Customer expectations are higher than ever when it comes to software quality, so testing becomes even more important. Quality assurance has steadily evolved through the years. Here are a few emerging trends in testing and quality assurance.

Test automation: Quality test automation can contribute a lot to efficiency. It may not substitute for the creativity brought in by manual testing, but it does help in making things quicker and more accurate.

Testing mobile and cloud-based systems: Cloud usage has grown manifold in recent times. As the 2013-2014 World Quality Report suggests, the percentage of enterprises using mobile testing grew from 31 percent in 2012 to 55 percent in 2013, and the graph continues to climb.

More emphasis on security: As a survey for the World Quality Report indicates, efficiency and performance constitute the primary focus for mobile testing, at 59 percent, followed closely by security at 56 percent. With the threat of hacking ubiquitous, security is sure to remain in focus.

Context-driven testing: Software testers can no longer follow the same standard procedure on every project. Rather, they have to follow a context-driven testing approach, acquiring an array of skills and the ability to judge which skill to use in a given situation.

Moving to a testing center of excellence model: The model deals with tool identification and the best practices for strengthening test efficacy. The testing process is transferred from development teams to a centralized testing team.

ThinkSys, one of the leading USA-based software testing companies, has kept pace with the changing software testing landscape. We have separate departments for software development and QA testing, which increases efficiency and cost-effectiveness. Our experts keep a close watch on the changing scenario, making sure that we keep moving with the current. Moreover, we adapt to client demands by streamlining the QA structure for better cost optimization and accuracy.

Is Ad Hoc testing reliable?

By – Simran Puri

What is Ad Hoc Testing?

Performing random testing without any plan is known as Ad Hoc Testing.  It is also referred to as Random Testing or Monkey Testing. This type of testing doesn’t follow any designed pattern or plan for the activity. The testing steps and the scenarios totally depend upon the tester, and defects are found by random checking.

Ad Hoc Testing does have its own benefits:

  • A totally informal approach, it provides an opportunity for discovery, allowing the tester to find missing cases and scenarios that might not be included in the test plan (if a test plan exists).
  • The tester can really immerse him / herself in the role of the end-user, performing tests absent of any boundaries or preconceived ideas.
  • The approach can be implemented easily, without any documents or planning.

That said, while Ad Hoc Testing is certainly useful, a tester shouldn't rely on it alone. For a project following scrum methodology, for example, a tester who focuses only on the requirements and performs Ad Hoc testing for the rest of the project's modules (apart from the requirements) will likely ignore some important areas and miss other very important scenarios.
When utilizing an Ad Hoc Testing methodology, a tester may attempt to cover all the scenarios and areas but will likely still end up missing a number of them. There is always a risk that the tester performs the same or similar tests multiple times while other important functionality is broken and ends up not being tested at all. This is because Ad Hoc Testing does not require all the major risk areas to be covered.

Performing Testing on the Basis of Test Plan

Test cases serve as a guide for the testers. The testing steps, areas and scenarios are defined, and the tester is supposed to follow the outlined approach to perform testing. If the test plan is efficient, it covers most of the major functionality and scenarios, and there is a low risk of missing critical bugs.
On the other hand, a test plan can limit the tester's boundaries. There is less of an opportunity to find bugs that exist outside of the defined scenarios. Or perhaps time constraints limit the tester's ability to execute the complete test suite.
So, while Ad Hoc Testing is not sufficient on its own, combining the Ad Hoc approach with a solid test plan will strengthen the results. By performing the tests per the test plan while at the same time devoting resources to Ad Hoc testing, a test team will gain better coverage and lower the risk of missing critical bugs. Also, the defects found through Ad Hoc testing can be included in future test plans so that those defect-prone areas and scenarios can be tested in a later release.

Additionally, in the case where time constraints limit the test team’s ability to execute the complete test suite, the major functionality can still be defined and documented. The tester can then use these guidelines while testing to ensure that these major areas and functionalities have been tested. And after this is done, Ad Hoc testing can continue to be performed on these and other areas.

ThinkSys Announces Cal Hacks Sponsorship, First Major Collegiate Hackathon In The San Francisco Bay Area

Sunnyvale, CA, September 26, 2014

ThinkSys Inc, a global technology company focused on software development, e-commerce, QA, and QA automation services, is proud to announce its sponsorship of Cal Hacks, the first major collegiate hackathon in the San Francisco Bay Area.

“ThinkSys is a longtime advocate of the hackathon concept,” says Leslie Sarandah, Vice President of Sales and Marketing. “Our executives have championed hackathons as a way to encourage motivated teams of developers to break away from their day-to-day responsibilities and work in teams on projects of their own design. This type of activity generates some amazing customer-centric innovations in a very short period of time.”

Alexander Kern, Director of Cal Hacks, states, “The hackathon attracts natural problem solvers. We expect this event to bring together some of the brightest students in their fields to direct their energies toward complex problems, with their solutions ultimately being judged by leaders in the technology industry. We are really excited to see the results.”

Cal Hacks will take place October 3 – 5 at U. C. Berkeley’s Cal Memorial Stadium. The event will bring together hundreds of undergraduate innovators, coders, and hackers from around the world to create incredible software and hardware projects. This collaborative experience offers invaluable connections, mentorship and teambuilding that will benefit participants today and in the future. The event will last 36 hours and is free to accepted participants.

About ThinkSys Inc
ThinkSys, a global technology products & services company, helps customers improve and grow their business and e-commerce initiatives across the Web and mobile channels. Employing over 120 technology specialists, ThinkSys develops, tests and implements high-quality, cost-effective solutions running in the cloud or on premise. As a leader in web and mobile manual and test automation and monitoring solutions, using its Krypton framework or other Industry tools, ThinkSys enables developers, QA Professionals and management to help reduce time to market. ThinkSys is privately held and is headquartered in Sunnyvale, CA. For more information visit ThinkSys.com

About Cal Hacks
Cal Hacks is the first major collegiate hackathon to take place in the San Francisco Bay Area. Additional sponsors of the event include Microsoft, Google, Dropbox and Facebook. For more information and a complete list of sponsors, go to calhacks.io

4th Annual Selenium Conference 2014

By – Rajnikant Jha

The 4th annual Selenium Conference was held for the first time in India, in Bangalore, on September 6, 2014. With 400 attendees from across the globe, around 20 speakers and the Selenium Committee members, this event delivered tremendous expertise and information. I would highly recommend it to anyone working with Selenium or related technologies.

Selenium is the most popular software testing framework for web applications today. The growing number of Selenium users reflects this popularity, and industry job trends show a rise in Selenium automation roles over the last three to four years compared with QTP (measured as a percentage of related job postings).

 

Starting with the Welcome Address given by Simon Stewart, the creator of the WebDriver open source web application testing tool and a core Selenium 2 developer, this conference was rich in information, valuable tips and smart people with many years of Selenium experience. The event's Lightning Talks track gave attendees an opportunity to discuss issues and questions with each other as well as with the Selenium Committee members. Some of the key items worth noting are as follows:

  • Selenium 3.0, which was originally due in December 2013, will be released in a couple of months.
  • With the Selenium 3.0 launch, Selenium RC has been officially deprecated, so companies using RC should switch to WebDriver.
  • Selenium 4.0 is also on schedule and will probably be released by year-end. It will also standardize on the W3C specification.
  • Selenium IDE is being superseded by Selenium Builder.
  • Selenium Grid will have a video recording functionality enhancement.

My questions, mainly related to the stability of WebDriver, will be addressed in the next releases of Selenium.

I also received a good response to some of the work-arounds we use at ThinkSys to improve script stability and handle browser quirks. For example:

  • Moving the mouse to the screen origin before the test script starts (a minimal sketch follows this list)
  • IE browser settings required to run automation
  • Using a Firefox profile with Selenium
  • Handling Chrome crashes
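
As an illustration of the first of these work-arounds, the mouse pointer can be moved to the screen origin with the standard java.awt.Robot class. This is a minimal sketch of the idea, not the exact ThinkSys implementation:

import java.awt.AWTException;
import java.awt.Robot;

public class MouseReset
{
    // Move the physical mouse pointer to the top-left corner of the screen
    // so that a stray hover state does not interfere with the test run.
    public static void moveMouseToOrigin() throws AWTException
    {
        Robot robot = new Robot();
        robot.mouseMove(0, 0);
    }
}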

There was so much good information at the conference that will help the community to design, maintain and execute test scripts using Selenium. Some of my favorite talks and presentations included:

1)      Perils of Page Object Pattern – by Anand Bagmar

The Page Object Pattern models pages within the test code, which reduces code duplication and improves maintainability by localizing changes in one place. Most WebDriver scripts and frameworks use the Page Object Pattern. The talk explained the pattern through the example of the Amazon application and code samples. From the discussion and code samples we understood the following limitations of the Page Object Pattern:

  • Test intent gets polluted
  • Duplication of implementation
  • Maintenance challenges
  • Scaling challenges

Does that mean we should not use the Page Object Pattern? We should still use it for the benefits it brings, but it should be created for the business layer of the application. The ideal test automation pyramid has two different types of test automation: technology tests, which include unit and integration tests, and business tests, which include UI, functional and regression tests. This pyramid helps express test intent in business terminology, and test intent is what matters most in the Page Object Pattern.

A business-layer Page Object Pattern, with the test intent expressed at the business layer, helps in designing the pattern correctly and has the following advantages (a brief code sketch follows the list below):

  • Effective automation scripts for business requirements
  • Abstraction layer allows separation of concerns
  • Maintenance and scaling becomes easier
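
For reference, a minimal Page Object in Java usually looks something like the sketch below; the page name, locators and methods are illustrative only and are not taken from the talk:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A small Page Object: locators and page interactions live here,
// so tests can express business intent rather than raw WebDriver calls.
public class SearchPage
{
    private final WebDriver driver;

    public SearchPage(WebDriver driver)
    {
        this.driver = driver;
    }

    public void searchFor(String term)
    {
        driver.findElement(By.id("search-box")).clear();
        driver.findElement(By.id("search-box")).sendKeys(term);
        driver.findElement(By.id("search-button")).click();
    }

    public String firstResultTitle()
    {
        return driver.findElement(By.cssSelector(".result .title")).getText();
    }
}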

2)      Scaling and Managing Selenium Grid – by Dima Kovalenko

The talk covered three main topics:

  • Stability
  • Speed
  • Coverage

The presentation covered all three topics and the main concerns Selenium users face when using Selenium Grid, offering a number of suggestions for stabilizing execution across the hub and nodes. The key points regarding stability were:

  • Move as much to Linux as possible
  • Run one test case at a time
  • Use better mechanisms such as cron jobs for OS and node configuration
  • Use WebDriver instead of RC
  • Create scheduled tasks to periodically restart the Grid node and the machine
  • Use a batch file for IE that cleans up cookies and cache and then launches Internet Explorer

For speed optimization, use of smaller nodes with single browsers is recommended. For cost effectiveness, the use of low-end machines should be considered.

For testing coverage purposes, we may have to use browsers like IE7 and IE8, but these browsers take more maintenance time. Browsers like IE9 and beyond are more stable and are recommended. If we have to use Safari, it is better to use one Safari browser per machine. There are some other practices we may consider for stability of execution on the nodes:

  • Automatically set the IE protected security zones on each reboot
  • Kill web browsers after each test
  • Automatically update drivers and jars
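
Using the Grid from test code is itself straightforward: a test addresses the hub with RemoteWebDriver, which forwards the session to a matching node, roughly as in the sketch below (the hub URL is a placeholder):

import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridExample
{
    public static void main(String[] args) throws Exception
    {
        // Point the test at the Grid hub; the hub forwards the session
        // to any registered node that offers Firefox.
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                DesiredCapabilities.firefox());
        driver.get("http://www.wikipedia.org/");
        System.out.println(driver.getTitle());
        driver.quit();
    }
}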

Overall, I found the conference educational and motivating. It was great to be surrounded by other technical people in my industry all sharing their knowledge about this technology. I hope to be there at the next conference and encourage others in the Selenium space to join me!

Creating Effective Bugs

By – Shraddha Pande
A skilled QA tester knows that perhaps the most important part of the role is the ability to create effective bugs. A bug is not useful to the testing process if it is not reproducible and properly documented. Developers rely on clear and understandable bug reports to pinpoint what needs to be fixed. Thus, it is critical that these reports and the identified bugs capture all of the necessary data and criteria.

An effective bug must have these qualities:

    • Easily Reproducible:

The basic requirement of a bug report is that the bug must be easy to reproduce. For this it should include the following:

  1. Title: The bug title should be a one-line accurate description.
  2. Steps: The steps to reproduce the bug must be few, clear and relevant.
  3. Summary: The actual and expected results must be descriptive enough so that the developer has a clear understanding of the problem. The expected results must describe precisely what needs to be fixed.
  4. Additional help: Whenever possible, attach a screenshot or video of the bug to the bug report to give the developers a more complete picture of the bug scenario.
  5. Platforms affected: Check the bug in all possible environments. For example, in website testing, one would run the scenario with different operating systems, browsers and mobile devices (versions and platforms) to reproduce the bug in different environments.
  • Severity and Priority:

The bug should be labeled with the Severity (Critical, Major, Normal, Minor, Trivial or Enhancement) of its impact on the application, as well as the Priority (High, Medium or Low) with which it has to be fixed.

  • Not a Duplicate:

The bug should be checked against the other tracked bugs to avoid duplication.

  • Deferrable or Not Deferrable:

Internal testers should also check the bug to ascertain whether its fix can be deferred to the next build release.
After these steps are completed, the QA engineer checks all of the above features, discusses the bug found with the testing lead and development team and then, finally, creates the bug.
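
Pulling these qualities together, a logged bug typically ends up looking something like the brief, purely illustrative example below:

Title: Login button unresponsive on checkout page in Chrome
Severity: Major    Priority: High
Environment: Windows 7, Chrome 37; not reproducible in Firefox 32
Steps to reproduce:
  1. Add any item to the cart and proceed to checkout.
  2. Enter valid credentials and click the Login button.
Expected result: The user is logged in and taken to the payment step.
Actual result: Nothing happens; the click on the Login button is ignored.
Attachment: Screenshot of the checkout page after the click.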

Conclusion

While the overall process outlined here is the basis of effective bug reporting, never underestimate the importance of good communication skills in the successful documentation and verbal explanation of issues. A knowledgeable and respectful dialogue between QA and development leads to greater understanding of the issues and a stronger end product.
Visit us at www.thinksys.com or drop us an email at [email protected] to connect with us.

Is Manual Testing Still Very Critical?

By – Shraddha Pande

The business imperative to drive value into the market at a faster and faster pace often translates into shorter development and delivery cycle times. And even with a top-notch development team, only a well-planned and systematic test plan will ensure that your products function as expected across Web and mobile channels when introduced to the marketplace.

A primer on manual testing of software

When selecting a testing approach, don't ignore the most basic testing method: Manual Testing. Sometimes considered elementary, this technique is also the oldest and most stringent form of testing software products. It is done by a test engineer who works with the product or application as an end-user would, executing the test cases manually without tool support. The test engineer verifies all the features of the application or product to ensure that its behavior is correct and in accordance with the client requirements.

Adhering to the Software Testing Life Cycle, the engineer will create and follow a Test Plan to ensure the comprehensiveness of testing while executing the test cases manually, without using automation testing tools. The test engineer creates Test Cases that exercise the application through a defined set of steps with defined Expected Results, which are then checked against the Actual Results. After executing the test cases manually, each functional test case is marked either as passed (zero defects) or failed (some defects). A benefit of manual testing is that all the test cases are executed manually by testers, meaning the results are less susceptible to machine faults.

Recommended Manual Test Process 

  1. Requirement Analysis: Determine and document the needs and requirements of the client, product, and application. Determine the needs and responsibilities for the testing process.
  2. Test Plan Creation: Build the Test Plan for the product/application on the basis of the requirements developed in Step 1. The Test Plan should include: Objective, Scope, Focus Areas, Time Estimation, Resources and Responsibilities.
  3. Test Case Creation: Create detailed Test Cases including Test Scenario.
  4. Test Case Execution: Execute the Test Cases to verify the actual and expected results.
  5. Defect Logging: Identified defects should be logged and tracked based on the conditions. We will discuss this more in our upcoming blog, “Creating Effective Bugs”.
  6. Defect Fix & Re-verification: After fixing any known defects, it is critical to re-verify and process them accordingly.
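
As a simple illustration of steps 3 and 4, a manual test case usually records at least an ID, the steps, the expected result and the actual result; the details below are purely hypothetical:

Test Case ID: TC-014
Scenario: User can log in with valid credentials
Steps: 1. Open the login page  2. Enter a valid username and password  3. Click Login
Expected Result: The dashboard page is displayed
Actual Result: As expected
Status: Passed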

Recognizing the Value of Manual Testing in Today’s World

  • It delivers better usability testing than automated testing.
  • Greater assurance that the product or application is free from machine defects.
  • Delivers detailed program analysis.
  • It does a superior job of identifying non-testable requirements.
  • This type of testing can also provide a better understanding of functionality.
  • It does a better job of covering Test Cases and Test Scenarios.
  • Manual testing scripts provide useful feedback to development teams and can form the basis for help or tutorial files for the application under test.
  • It can cover certain security aspects that automation tools are not designed to address.
  • It can lead to discovery of more complex vulnerabilities due to its flexibility. Humans can run a creative combination of attacks to discover any vulnerability out of reach of the automation test tools.
  • Automation testing benefits from building on the work already accomplished in the manual testing process.

Conclusion:

In sum, it is important that QA teams recognize that Manual Testing can deliver critical results in the testing process. Manual Testing generally has lower up-front costs and allows a team to exercise flexibility during the testing process. Manual Testing can also be combined with an automated approach to deliver very positive and powerful results.

Keep an eye on our blog section for more on this topic, and click the link to learn more about manual testing tools.

Visit us at www.thinksys.com or drop us an email at [email protected] to connect with us.

Is Complete Test Automation Always Good?

With advances in technology and the drive for effective service delivery, QA and testing service providers are venturing into automation. Automation may take the lead in accuracy, but what about the flexibility and out-of-the-box thinking that a human tester can offer? Automation is useful for tasks that are repetitive and require little human interaction, focusing on the area it is designed for. But what about the related areas, and the bugs surrounding them, that it may miss?

As part of a cost-effectiveness strategy, companies may think that all tests should be automated, on the grounds that it is a one-time effort and cheaper than paying many people to test manually. How true is that? Creating and maintaining automated tests also requires resources, along with the cost of the tools, so the assumption may hold in some cases but not always.

Test automation is a great idea for projects where the product is already developed and needs to be strengthened. It may not be the right choice for testing a brand-new product. For a new product under test, a careful combination of manual and automated tests should be used so that testing does not overlook bugs the automation cannot foresee. It can prove costly if bugs overlooked by automation are detected at a later stage. To make automation more effective, test cases for all the error scenarios have to be designed separately, which may be an additional, costly effort.
Test automation also has side-effects: many bugs may be missed by the automation, affecting the quality of the released product and lowering customer satisfaction if customers run into those bugs.
In the end, a careful combination of manual and automated testing is a good approach, as either one on its own may not prove as effective as it should be.

ThinkSys Strengthens Management Team With Addition Of Former CIO Of LG India

Sunnyvale, CA, May 19, 2014

Daya Prakash, as Head of its Indian Operations
ThinkSys adds Daya Prakash, a recognized and award-winning CIO, to its management in India, bringing seasoned expertise to its customers and employees.

ThinkSys Inc., a global technology services company focused on software development, e-commerce, QA, and QA automation services, today announced the expansion of its Management Team with the addition of Daya Prakash, as Head of Indian Operations in its Noida Office.

Mr. Prakash has over 20 years of experience in Business and Technology and was most recently the CIO of LG Electronics India (KRX: 066570, LSE: LGLD). At LG, he led all of LG India’s technology operations. Managing a vast team responsible for implementing leading-edge technologies and business process innovation, Mr. Prakash played a key role in helping LG India grow its business from a few million USD to over 3.6 Billion USD in a span of a decade. In recognition of his leadership in the industry, Mr. Prakash has been honored with many prestigious awards including Global CIO by UB Media, CIO Super Achiever Award by IDG, CIO 100 by IDG 2007-2011, CTO of Year (Mfg) by Dun & Bradstreet, Top 100 CISO Award by Info Security and finishing in the top two for India’s most respected CIO in 2012.

“Mr. Prakash's experience and expertise add strength and depth to our management team and will be a major factor in successfully executing our strategy to drive future growth. I am excited to have such seasoned expertise at hand for the benefit of our customers and employees,” said Rajiv Jain, CEO, ThinkSys.

“I am very excited and at the same time feel privileged to be a part of the ThinkSys family. ThinkSys has been providing first-class services to its worldwide customer base for many years now. The company has demonstrated that it is capable of playing a vital role in the mission-critical initiatives of its clients, and I look forward to ThinkSys growing from a simple service provider to a customer's most trusted partner,” said Mr. Prakash.

An active member of the CIO community, Mr. Prakash has had his thoughts and articles published in leading magazines and national dailies including Data Quest, Network Computing, CIO, Economic Times, Financial Express and Computer Express. He continues to be an active speaker at various forums and seminars conducted by national and international groups. Mr. Prakash has a Masters in Computer Management and an MBA. Always hungry for knowledge, he continues his studies, pursuing a PhD.

About ThinkSys Inc
ThinkSys, a global technology products & services company, helps customers improve and grow their business and e-commerce initiatives across the Web and mobile channels. Employing over 120 technology specialists, ThinkSys develops, tests and implements high-quality, cost-effective solutions running in the cloud or on premise. As a leader in web and mobile manual and test automation and monitoring solutions, using its Krypton framework or other Industry tools, ThinkSys enables developers, QA Professionals and management to help reduce time to market. ThinkSys is privately held and is headquartered in Sunnyvale, CA. For more information visit http://www.thinksys.com.

ThinkSys Creates End-To-End Customer Experience Management Solutions

IBM and Arrow Electronics to increase online conversions and revenues
Sunnyvale, California – December 2, 2013

ThinkSys, Inc., a global technology services company headquartered in Silicon Valley and an IBM Business Partner, is partnering with Arrow Electronics to help customers improve and grow their e-commerce business across multiple online and mobile channels.

“We have midmarket and enterprise customers around the globe,” said Rajiv Jain, CEO of ThinkSys. “They want to know how they can gain critical insight into how customers are experiencing their online and mobile channels. I tell them they need solutions that provide immediate visibility into the customer experience, prioritize issues affecting online conversion and customer retention rates, and speed problem resolution. They need IBM Tealeaf.”

Arrow Electronics is ThinkSys’ distribution partner for IBM solutions. Arrow is a $14 billion technology company and one of IBM’s largest solution distributors.

“Arrow is proud to be partnering with the talented team at ThinkSys,” says Shannon McWilliams, senior director of IBM software sales for Arrow Electronics. “We look forward to working with them as they optimize the digital and traditional marketing channels that drive business success.”

Major companies around the globe rely on IBM solutions to increase enterprise efficiency, workforce productivity, and infrastructure flexibility. ThinkSys plans to focus resources on the IBM smarter commerce initiative, with a special emphasis on implementing and supporting the IBM Tealeaf portfolio of products.

“ThinkSys has helped numerous companies improve their online presence and deliver the goods and services their users expect,” said Leslie Givens Sarandah, vice president of marketing and sales for ThinkSys. “Our relationship with IBM and Arrow complements our technical resources. Our leadership strongly believes in this direction. It will help us deliver customer-centric mobile solutions and e-commerce success today and going forward.”

About ThinkSys Inc
ThinkSys, Inc. (www.thinksys.com) is a global technology services company that helps customers improve and grow their e-commerce business across web and mobile channels. Employing more than 120 technology specialists, ThinkSys develops, tests and implements effective, affordable solutions using cloud-based or on-premise technologies. Deloitte Technology has designated ThinkSys as a Fast 50 company.

About Arrow Electronics
Arrow Electronics (www.arrow.com) is a global provider of products, services and solutions to industrial and commercial users of electronic components and enterprise computing solutions. Arrow serves as a supply channel partner for more than 100,000 original equipment manufacturers, contract manufacturers and commercial customers through a global network of more than 470 locations in 55 countries.

Selecting Platform for Your Mobile Apps

In the early days of mobile app development, people might still remember BREW, Symbian, and Java ME, but with the advent of smartphones, the choices started to simplify: If you were targeting enterprise business users, you developed for Blackberry; if you were developing for any other user, you developed for iOS. Then Android entered the picture, and now Windows has started to show its head.

At first, Android’s arrival was not a big deal, because there was only one version of their OS available on limited devices; you could still bet on iOS or Blackberry and win. Blackberry is no longer even a consideration, however, when developing mobile apps. In fact, between September 2011 and August 2012, Blackberry usage in the United States dropped 25 percent, and the mobile platform now boasts only about 1 percent share of the market. Blackberry is dying fast.

The chart below shows the top 10 platforms that are in the minds of the developers world-wide.

The trend of top platforms that developers are choosing correlates nicely with the number of handsets being sold worldwide. The numbers below show the increase in market share for Android, iOS and Windows, as well as the significant decline in BlackBerry and Symbian sales.

In the end, developers are working on the platforms that have the farthest reach. It is clear that the continuous year-over-year drop in Java ME, Blackberry and Symbian is making these platforms less relevant in the smartphone market.

However, an interesting number in Fig. 1 is the Mobile Web – HTML 5 platform, which increased 56% in 2011 and continues to remain stable for this year. Despite the fact that mobile browsers continue to get more fragmented, and the Mobile Web wrestles with performance issues and lack of functional richness, the cross-platform nature of the Mobile Web platform continues to attract developers. We are seeing continued interest with our customers in this space and feel that this platform will remain important in the near future.

When looking at future trends for platform choices and interviewing over a thousand developers, the Developer Economics 2012 survey (see Fig. 3) shows a significant increase in the choice for the Windows Platform.

 

As Nokia and Microsoft continue their aggressive marketing for the adoption of Microsoft's new Windows platform, sales of Windows phones continue to show mixed results. Nokia's Lumia 900 is getting good reviews from the press, and its base functionality is in most cases on par with top-of-the-line iOS and Android phones. Even though we continue to believe that developers will develop for the platforms with the farthest reach, Windows is at present one of the top platforms in the minds of developers. The main reason for this, we believe, is the ability developers have to develop for the Windows 8 Metro UI, which offers an easy port to the Windows platform. This will get better as Microsoft merges the APIs for mobile and OS development. As enterprise applications continue to increase in both development and usage, this will continue to be an important development platform.

Having said that, Microsoft and Nokia have a short window in which to start showing increased market share and customer reach beyond the measly 2%.

At the same time that Blackberry was gasping its last breath, open source technology was leading to a rapid fragmentation of Android, whose usage exploded both in the U.S. and globally. By mid-2012 Android devices were selling four times faster than Apple. Together, Apple and Android account for 85 percent of the mobile market. While the easy choice as to what platform you should develop your mobile app for is Apple, Google’s Android and all of its associated sweet-treat operating systems account for 50.1 percent of the market.

With Blackberry out of the picture as the dedicated business phone (unless you work for the government), companies no longer have the luxury of choosing a single platform if they want to be competitive, visible and relevant in mobile world. And more than ever, people are relying on their mobile devices to access the Internet.

Android and Apple each take very different approaches to their operating system updates. Apple is streamlined; they introduce a new iOS to coincide with the release of a new device, and they make previous versions obsolete, forcing everyone, for the most part, to adopt the same platform. (Even users are forced to comply: every time a user logs into iTunes, they have to update to the latest iOS).

Android, on the other hand, presents a garbled mess of new and old platforms and no standardization for device screen sizes. There are 11 OSs currently circulating for Android, with a 12th, Jelly Bean, just hitting the market. Yet with more than half of all mobile device users devoted to Android in one form or another, you can’t afford not to develop your app to be compatible with the Android OS platform(s).

Rapidly changing technology makes it costly and difficult to retain the talent necessary to develop mobile apps for multiple platforms. Because developing for the different mobile platforms requires extensively different knowledge – different languages, different protocols, different development strategies – it is very difficult for a company to maintain its own development team capable of writing mobile apps for multiple platforms. The smart answer to the question “What platform should I develop my mobile app for?” is the hardest answer to give: all of them. Developers can make it simple. If performance and local platform functionality are less important, you might want to choose the Mobile Web platform (HTML 5).