Web Application Functional Regression Testing Using Selenium

At Foliotek, we use a rapid development methodology.  Typically, a new item will go from definition through coding to release in a month’s time (bucketed along with other new items for the month).  A bugfix will nearly always be released within a week of the time it was reported.  In fact, we are currently experimenting with a methodology that will allow us to test and deploy new items individually as well – which means that a new (small) item can go from definition to release in as little as a week, too.

Overall, this kind of workflow is great for us, and great for our customers.  We don’t need to wait a year to change something to make our product more compelling, and customers don’t have to wait a year to get something they want implemented.  We also avoid the shock of suddenly introducing a year’s worth of development to all our customers all at once – a handful of minor changes every month (or week) is much easier to cope with.

However, it also means that Foliotek is never exactly the same as it was the week before.  Every time something changes, there is some risk that something breaks.   We handle this risk in two ways:

  1. We test extremely thoroughly
  2. We fix any problems that arise within about a week (severe problems usually the same day)

At first, we did all testing manually.  This is the best way to test, assuming you have enough good testers with enough time to do it well.  Good testers can’t be just anyone – they have to have a thorough knowledge of how the system should work, they have to care that it does work perfectly, and they have to have a feel for how they might try to break things.  Having enough people like this with enough time to do testing is expensive.

Over time, two related things happened. First, we added more developers to the project and started building more, faster. Second, the system grew bigger and more complex.

As more people developed on it and the system grew more complex, our testing needs grew exponentially. The rise in complexity and in the number of people working on the code led to much, much more potential for side-effects – problems where one change affects a different (but subtly related) subsystem. Side-effects by their nature are impossible to predict. The only way to catch them was to test EVERYTHING any time ANYTHING changed.

We didn’t have enough experienced testers to do that every month (new development release) let alone every week (bugfix release).

To deal with that, we started by writing a manual regression test script to run through each week. While this didn't free up any time overall, it did mean that once the test was written well, anyone could execute it. This was doable because we had interns who had to be around to help handle support calls anyway, and they were only intermittently busy. In their free time they could execute the tests.

Another route we could have gone would have been to write automated unit tests (http://en.wikipedia.org/wiki/Unit_testing). Basically, these are tiny contracts the developers write that say something like “calling the Add function on the User class with name Luke will result in the User database table having a new row with name Luke”. Each time the project is built, the contracts are verified. This is great for projects like code libraries and APIs, where the product of the project IS the result of each function. For a web application, though, the product is the complex interaction of functions and how they produce on-screen behavior. There are lots of ways that the individual functions could all be correct and the behavior could still fail. It is also difficult or impossible to unit test the client-side parts of a web application – JavaScript, AJAX, CSS, etc. Unit testing would cost a non-trivial amount (building and maintaining the tests) for a trivial gain.

Eventually, we discovered the Selenium project (http://seleniumhq.org/download/). The idea of Selenium is basically to take our manual regression test scripts and automate them, so that a computer runs the tests in a browser (pretty much) just like a human tester would. This allows us to greatly expand our regression test coverage, and run it for every single change we make and release.

Here are the Selenium tools we use and what we use them for:

  • Selenium IDE (http://release.seleniumhq.org/selenium-ide/): A Firefox plugin that lets you quickly create tests using a ‘record’ function that builds them out of your clicks, lets you manually edit them to make your tests more complex, and runs them in Firefox.
  • Selenium RC (http://selenium.googlecode.com/files/selenium-remote-control-1.0.3.zip): A Java application that will take the tests you create with Selenium IDE and run them in multiple browsers (Firefox, IE, Chrome, etc.). It runs from the command line, so it's fairly easy to automate test runs into build actions and the like.
  • Sauce RC (http://saucelabs.com/downloads): A fork of RC that adds a web UI on top of the command-line interface. It's useful for quickly debugging tests that don't execute properly in non-Firefox browsers. It also integrates with SauceLabs – a service that lets you run your tests in the cloud on multiple operating systems and browsers (for a fee).
  • BrowserMob (http://browsermob.com/performance-testing): An online service that will take your Selenium scripts and use them to generate real user traffic on your site. Essentially, it spins up real machines, each running an instance of Firefox, to execute your test just as you would locally – for a fee. It costs less than $10 to test up to 25 “real browser users” – which can actually map to many more users than that, since the automated test doesn't have to think between clicks. It gets expensive quickly to test more users than that.

Selenium is a huge boon for us. We took manual tests that would occupy a tester for as much as a day, and made it possible to run those same tests with minimal interaction in half an hour or less. We'll be able to cover more test cases and run them more often – even running them as development occurs, to catch issues earlier.

In my next post, I’ll talk about the details of how you build tests, run them, maintain them, etc. with the tools mentioned above.

bindWithDelay jQuery Plugin

Sometimes, I want a JavaScript event handler that doesn't fire until the native event has stopped firing for a short timeout. I've needed that pattern in almost every project I have worked on.

For example, you want to use JavaScript to resize an iframe to 100% height when the window resizes. The resize() event can fire dozens of times, and calculating and setting the new height can slow down your page. I used to implement it like this:

var timeout;
function doResize(e) {
   clearTimeout(timeout);
   timeout = setTimeout(function() {
      // run some code
   }, 200);
}
$(function() {
   $(window).bind("resize",doResize);
});

Notice that there are extra variables that you have to deal with, and extra indentation. You could at least clean up the global variable using closures, but you get the idea.
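
Here is one way that closure-based cleanup might look (a sketch for illustration only, not code from the plugin or any project):

var doResize = (function() {
   // the timeout handle now lives in this closure instead of a global variable
   var timeout;
   return function(e) {
      clearTimeout(timeout);
      timeout = setTimeout(function() {
         // run some code
      }, 200);
   };
})();

$(function() {
   $(window).bind("resize", doResize);
});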

I wrote a plugin to make this pattern easier; it is called “bindWithDelay”. The source code is online, as is a mini project page with a demo.

This is what the same code looks like with the plugin:

function doResize(e) {
      // run some code
}
$(function() {
   $(window).bindWithDelay("resize", doResize, 200);
});
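
If you are curious how such a plugin can work internally, here is a minimal sketch of the idea: a thin debounce wrapper around jQuery's bind. This is illustrative only; the real bindWithDelay source linked above is the complete version.

// Sketch: wrap the real handler so it only runs after the event has been
// quiet for `timeout` milliseconds.
jQuery.fn.bindWithDelay = function(type, fn, timeout) {
   return this.each(function() {
      var wait = null;
      $(this).bind(type, function(e) {
         var self = this;
         clearTimeout(wait);
         wait = setTimeout(function() {
            fn.call(self, e);
         }, timeout);
      });
   });
};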

Handy ASP.NET Debug Extension Method

Most of the programmers I know (myself included) don't bother with the built-in Visual Studio debugging tools. They are slow and resource-intensive. Usually, it's more efficient to just do one or more Response.Write calls to see key data at key steps.

That can be a hassle, though. Most objects don’t print very well. You have to create a loop or write some LINQ/String.Join to write items in a collection.

Inspiration struck – couldn't I write an extension method on object to write out a reasonable representation of pretty much anything? I could write out HTML tables for lists, with columns for properties, etc.

Then I thought – I love the JavaScript debug console in Firebug. I can drill down into individual items without being overwhelmed by all of the data at once. Why not have my debug information emit JavaScript that writes to the debug console? That also keeps it out of the way of the rest of the interface.

Here’s the code:

public static void Debug(this object value)
{
    // Convenience overload: writes the value to the current response, if there is one.
    if (HttpContext.Current != null)
    {
        HttpContext.Current.Response.Debug(value);
    }
}

public static void Debug(this HttpResponse Response, params object[] args)
{
    new HttpResponseWrapper(Response).Debug(args);
}

public static void Debug(this HttpResponseBase Response, params object[] args)
{
    ((HttpResponseWrapper)Response).Debug(args);
}

public static void Debug(this HttpResponseWrapper Response, params object[] args)
{
    // Only emit the script block for HTML responses.
    if (Response != null && Response.ContentType == "text/html")
    {
        Response.Write("<script type='text/javascript'>");
        Response.Write("if(console&&console.debug){");
        Response.Write("console.debug(" + args.SerializeToJSON() + ");");
        Response.Write("}");
        Response.Write("</script>");
    }
}

The various overloads allow:

myObject.Debug();
new {message="test",obj=myObject}.Debug();
Response.Debug("some message",myObject,myObject2);
//etc

The only other thing you'll need is the awesome JSON.NET library, which the .SerializeToJSON() call uses to turn the .NET object into a form JavaScript can deal with. Get it here. FYI, the library can choke when serializing some complex objects, so occasionally you'll need to simplify the object before calling Debug.

Make Table Rows Sortable Using jQuery UI Sortable

So you want to make table rows sortable using jQuery UI? Luckily, the Sortable interaction does most of the work for you.

But there's a catch: one problem that I ran into when implementing this (with UI version 1.7) was that the cell widths of the row would collapse once I started dragging it.

Suppose you have a table of data, like this one:

<table id="sort" class="grid" title="Kurt Vonnegut novels">
	<thead>
		<tr><th>Year</th><th>Title</th><th>Grade</th></tr>
	</thead>
	<tbody>
		<tr><td>1969</td><td>Slaughterhouse-Five</td><td>A+</td></tr>
		<tr><td>1952</td><td>Player Piano</td><td>B</td></tr>
		<tr><td>1963</td><td>Cat's Cradle</td><td>A+</td></tr>
		<tr><td>1973</td><td>Breakfast of Champions</td><td>C</td></tr>
		<tr><td>1965</td><td>God Bless You, Mr. Rosewater</td><td>A</td></tr>
	</tbody>
</table>

Your first attempt to make it sortable might look like this:

$("#sort tbody").sortable().disableSelection();

And it actually works, but there is a bit of a problem. The cell widths seem to be collapsing once you start dragging a row (notice how close the “C” cell is to the “Breakfast of Champions” cell). It looks like this:

[Screenshot: sortable rows with collapsed cell widths]

The problem has to do with the helper object. The helper object is basically the DOM element that follows the cursor during the drag event. When it is created by default, the cells collapse to the size of the content inside of them.

You can specify a function that returns a jQuery object to create a custom helper object. By writing a function that explicitly sets each cell to its current width, this problem can be fixed.

// Return a helper with preserved width of cells
var fixHelper = function(e, ui) {
	ui.children().each(function() {
		$(this).width($(this).width());
	});
	return ui;
};

$("#sort tbody").sortable({
	helper: fixHelper
}).disableSelection();

Now it works as expected:
[Screenshot: sortable rows with preserved cell widths]

Accessible Custom AJAX and .NET

One general rule in making an accessible web application is that you shouldn't change the content of the page with JavaScript. This is because screen readers have a tough time monitoring the page and notifying the user of dynamic changes. Usually, the reason you would use Ajax is exactly that – do something on the server, and then update some small part of the page without causing the whole page to refresh. That means if you have an “ajaxy” page and you care about accessibility, you have to provide a non-Ajax interface as a fallback for screen readers.

Ajax.NET, Microsoft's library for Ajax, makes this fallback easy to implement. Microsoft has you define your AJAX properties in an abstraction layer – the page markup (.aspx) – which means that the framework can decide not to render the AJAX code to certain browsers (and screen readers) that will not support it, and instead use the standard postback method.

The problem with Ajax.NET is that the communication can be bloated (mostly because of the abstraction layer, it sends more post values than you might need – like the encrypted viewstate value), which negates many of the benefits of an Ajax solution. I really wanted to roll my own Ajax communications to make them as lightweight as possible.

My solution was to write the page in a standard .NET postback manner, and then use a user preference (exposed to the page as a JavaScript variable) to decide whether jQuery should replace the postback with an Ajax call.

Here’s my code:

$(function() {
    if (serverVars.uiVersion != "accessible") { // use js/ajax for checking/unchecking where possible
        var $todos = $("#chkToDo");
        $todos.removeAttr("onclick"); // remove the postback
        $todos.click(
            function() {
                // some stuff altering the document, and an ajax call to report the change to the db
            }
        );
    }
});
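
For illustration, the body of that click handler might look something like the following sketch. The class name, handler URL, and parameters here are made up for the example; they are not the actual Foliotek code.

function() {
    var $box = $(this);
    // update the page immediately on the client
    if ($box.is(":checked")) {
        $box.parent().addClass("complete");
    } else {
        $box.parent().removeClass("complete");
    }
    // report the change to the server with a minimal request
    // (no viewstate or other postback baggage)
    $.post("ToDoHandler.ashx", { id: this.id, checked: $box.is(":checked") });
}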

This works great, although you need to be careful about your server-side events. In my case, I had an OnCheckedChanged event to handle the postback/accessible mode. Even though checking/unchecking the box no longer fired an autopostback, ASP.NET will still fire the CheckedChanged event if you post back later for some other reason (e.g. a LinkButton elsewhere on the page) after the checked status has changed. So, if a user changed the state of a checkbox and then clicked another LinkButton on the page, instead of being sent to the intended page, the user just got a full page refresh (because my CheckedChanged event redirected to reload the page, which caused it to skip the ‘click’ event of the LinkButton). Once I realized this was happening, it was easy enough to fix: I just needed to run the event logic only when the user was in accessibility mode. I spent a little time running in circles on that one, though; at first I thought my client-side document changes were causing a ViewState validation error on the server.

Extending jQuery to Select ASP Controls

If you have worked with JavaScript in an ASP.NET Web Forms environment, you almost certainly have been frustrated that this markup:

	<asp:TextBox runat="server" ID="txtPhoneNumber" />

renders out as something like:

	<input type="text" id="ctl00_ctl00_ctl00_main_Content_txtPhoneNumber"
		name="ctl00$ctl00$ctl00$main$Content$txtPhoneNumber" />

The fastest and easiest way to get a reference to a DOM element in JavaScript is using the ID attribute and document.getElementById(). Unfortunately, the ID attribute generated by the server is unpredictable and based on the server naming containers. There are a couple of ways I have previously dealt with that problem, but both have drawbacks.

Old Solutions

  1. Accessing the ClientID of the server control

    If you use inline code inside <% %> tags, you can access the ClientID directly from the .aspx page.

    	var goodID = '<%= txtPhoneNumber.ClientID %>';  // = 'ctl00_ctl00_ctl00_main_Content_txtPhoneNumber'
    	var badID = 'txtPhoneNumber'; // The text box does not have this ID, it will not work
    	var uglyID = 'ctl00_ctl00_ctl00_main_Content_txtPhoneNumber'; // DO NOT hardcode the generated ID into your code!
    

    This is not ideal because you cannot reference the ClientID from outside of the page, so you cannot keep your JavaScript in external files.

  2. Setting attributes on control and accessing with jQuery selectors

    jQuery has an excellent selector API that can easily grab an element if you know attributes on it. So, if there were a couple controls defined as:

    	<asp:TextBox runat="server" ID="txtPhoneNumber" CssClass="txtPhoneNumber" />
    	<asp:TextBox runat="server" ID="txtAddress" ClientID="txtAddress" />
    

    You could access them with jQuery:

    	$(".txtPhoneNumber").keyup(...);   // This works
    	$("[ClientID='txtAddress']").keyup(...);   // This works
    
    	$("#txtPhoneNumber").keyup(...);   // This still DOESN'T work
    	$("#txtAddress").keyup(...);   // This still DOESN'T work
    

    This is not ideal because it requires adding extra attributes onto any server control that you want to access with JavaScript.

Original jQuery Solution

I first happened upon a solution to the same problem over on John Sheehan's blog. This looked promising, but did not work with the then-current release of jQuery (1.3). Another contributor to this blog, Tim Banks, updated the code to work with the newer version.

However, I found an error in Internet Explorer, and found that a more reliable way to get the element was to use the jQuery attribute selector and match based on an “id” attribute that ends with the server ID. So, I wrote a JavaScript function called $asp that returns a jQuery collection. This function worked well and was implemented in the latest Foliotek release.

	function $asp(serverID) {
		return $("[id$='" + serverID + "']");
	}

	// Once this function is included, you can call it and get back a jQuery collection
	$asp("txtPhoneNumber").keyup(...);

Updated jQuery and Sizzle Solution

It seemed like it would be better if the solution actually extended the selector engine rather than using a function to select objects. This would fit better into a jQuery development paradigm, allow more complex selectors, and also allow the selector to be used in other library functions, like “filter” or “find”. Also, it would be nice to be able to use the tag name in the selector to give a performance improvement, since getElementsByTagName is a fast operation that will narrow the element collection.

So, I returned to the selector, fixed the IE bug, and made sure it worked with the latest version of jQuery at the time (1.3.2). This short function extends the Sizzle selector engine to return elements whose ID ends with the ID passed in the parentheses after the “:asp()” selector.

	// Include this function before you use any selectors that rely on it
	jQuery.expr[':'].asp = function(elem, i, match) {
		return (elem.id && elem.id.match(match[3] + "$"));
	};

	// Now all of these are valid selectors
	// They show why this method has more functionality than the previous $asp() function.
	$(":asp(txtPhoneNumber)")
	$("input:asp(txtPhoneNumber):visible")
	$(":asp(txtPhoneNumber), :asp(txtAddress)")
	$("ul:asp(listTodos) li")
	$("#content").find("ul:asp(listTodos)")

This function allows access to server controls without adding additional markup and without the JavaScript existing on the .aspx page.


Note that there is a potential limitation: if you had one control with ID="txtPhoneNumber" and another with ID="mytxtPhoneNumber", both generated IDs end with "txtPhoneNumber", and the selector would not necessarily return the correct element. This solution is not perfect in that sense, but the benefits it provides over other methods (cleaner markup and the ability to use external JavaScript files) make it a good alternative.
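
If that ambiguity ever becomes a problem, one possible refinement (just a sketch, relying on the underscore that ASP.NET inserts between naming containers in the generated ID) is to require the server ID to appear either at the start of the ID or immediately after an underscore:

	// Stricter version: ":asp(txtPhoneNumber)" matches "..._txtPhoneNumber" or "txtPhoneNumber",
	// but not "mytxtPhoneNumber".
	jQuery.expr[':'].asp = function(elem, i, match) {
		return (elem.id && new RegExp("(^|_)" + match[3] + "$").test(elem.id));
	};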

Finding real body height using jQuery

Sometimes, you need to know the full height of a document's content. In our case, we needed to set an iframe to be the full height of its content. There are several ways to attempt this:

document.body.clientHeight;
document.body.offsetHeight;
document.body.scrollHeight;

I’ve found that all of these require significant tweaking to get accurate results across all browsers. Each property means something different in each browser. That sucks for coding and maintenance. Enter jQuery:

$("body").height();

Unfortunately, even that doesn’t seem to work very well in IE.

Here’s a workaround that should always work, though. Basically, you create a temporary div that contains everything from the body, insert it off screen, and measure it:

function getDocumentHeight()
{
    if ($.browser.msie)
    {
        var $temp = $("<div>")
            .css("position", "absolute")
            .css("left", "-10000px")
            .append($("body").html());

        $("body").append($temp);
        var h = $temp.height();
        $temp.remove();
        return h;
    }
    return $("body").height();
}

It seems like it would be really inefficient, but it actually performs fairly well.
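
As a usage sketch, here is roughly how this ties into the iframe case. The frame id and the assumption that getDocumentHeight() is defined inside the framed page are hypothetical details for the example, not code from the actual application:

// In the parent page: once the frame loads, ask the framed page for its
// content height and size the iframe to match.
$(function() {
    $("#contentFrame").bind("load", function() {
        var frameWindow = this.contentWindow;
        if (frameWindow && frameWindow.getDocumentHeight) {
            $(this).height(frameWindow.getDocumentHeight());
        }
    });
});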