Running Tests in Parallel with Selenium

Posted By: Matthew Sneeden

The ability to run multiple tests in parallel is key when creating large, scalable, automated test suites, and it becomes even more important once you move into continuous integration and deployment.

Luckily, Selenium users can accomplish this with a few modifications and some restructuring of existing tests.  Out of the box, Selenium is coupled with the NUnit framework.  For this example, we will use the MbUnit framework, which is included with the Gallio automation platform, for a C# implementation of Selenium.

After installing Gallio, any references to NUnit in existing and/or new test projects must be replaced with MbUnit.  MbUnit includes an attribute named ‘Parallelizable’ that can be applied at either the test or the test fixture level.  As you may have guessed from the name, this attribute designates a test or test fixture as capable of being run in parallel with other tests or test fixtures.
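To illustrate the two scopes, here is a minimal sketch of both placements; the class and test names are placeholders, not part of the example that follows:

```csharp
using MbUnit.Framework;

[TestFixture]
[Parallelizable(TestScope.All)] // every test in this fixture may run in parallel
public class FixtureLevelExample {
    [Test]
    public void SomeTest() { /* ... */ }
}

[TestFixture]
public class TestLevelExample {
    [Test]
    [Parallelizable] // only this test is marked as parallel-capable
    public void IndependentTest() { /* ... */ }
}
```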

Following the common examples for writing Selenium tests and attempting to slip-stream in the Parallelizable attribute will not work.  Because all of the tests within the class reference the same Selenium instance, once the first test finishes, the Selenium object is destroyed and all subsequent tests fail.

In this example, we define a base fixture that handles all SetUp and TearDown for the test fixtures that derive from it.  We define a dictionary in which all of the Selenium instances reside: on SetUp we spin up a new Selenium instance and add it to the container, keyed by the current test name.  Likewise, on TearDown, we fetch the same instance and shut it down, leaving the other instances untouched.

Base Fixture:
    public class FixtureBase {
        // One Selenium instance per running test, keyed by test name
        private Dictionary<string, ISelenium> _seleniumContainer = new Dictionary<string, ISelenium>();
        // Guards the dictionary: parallel tests call SetUp/TearDown concurrently
        private readonly object _containerLock = new object();

        protected Dictionary<string, ISelenium> SeleniumContainer {
            get { return _seleniumContainer; }
        }

        [SetUp]
        public void SetUp() {
            ISelenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "url to application under test");
            selenium.Start(); // Launch the browser session
            lock (_containerLock) {
                _seleniumContainer.Add(Gallio.Framework.TestContext.CurrentContext.Test.Name, selenium);
            }
        }

        [TearDown]
        public void TearDown() {
            // Shut down only the instance belonging to the current test
            string testName = Gallio.Framework.TestContext.CurrentContext.Test.Name;
            lock (_containerLock) {
                if (_seleniumContainer.ContainsKey(testName)) {
                    try {
                        // Close the browser
                        _seleniumContainer[testName].Stop();
                        _seleniumContainer.Remove(testName);
                    }
                    catch { }
                }
            }
        }
    }

Then, in the test class itself, we derive from our base test fixture, which supplies all SetUp and TearDown functionality.  Notice the Parallelizable attribute and how it is applied to this fixture.  You can apply the attribute individually at the test level, but for an entire fixture that should run in parallel, the TestScope.All parameter designates the entire fixture (and all tests contained within it) to be run in parallel.  Within each test we must now fetch the correct Selenium instance from the container, using the current test name as the key.  From that point on, the desired test logic can be implemented.

Test Fixture:
    [Parallelizable(TestScope.All)]
    public class TestClass : FixtureBase {
        [Test]
        public void Test() {
            // Fetch the Selenium object created for this test
            ISelenium selenium = SeleniumContainer[Gallio.Framework.TestContext.CurrentContext.Test.Name];
            // Additional test logic…
        }
    }

Within the AssemblyInfo.cs file, there is one important piece of information as it pertains to this example.  You’ll notice an assembly attribute named ‘DegreeOfParallelism’; this value allows the user to control the number of concurrent threads Gallio allocates to running tests.  You’ll want to evaluate, based on your hardware, what setting works best for your machine.

[assembly: DegreeOfParallelism(6)]

Running your tests in Gallio is no different: just select your tests and test fixtures and run.  The only difference you’ll notice is that your tests run quicker!

Matthew Sneeden is a member of the Quality Assurance team at FT.  He takes part in a wide array of testing activities ranging from in-sprint to regression, and manual to automated.  He also takes part in feature discussions and helps identify candidate content for future standards documents to help better serve our clients.

Posted In: Tips


