Web Application Functional Regression Testing Using Selenium


At Foliotek, we use a rapid development methodology.  Typically, a new item will go from definition through coding to release in a month’s time (bucketed along with other new items for the month).  A bugfix will nearly always be released within a week of the time it was reported.  In fact, we are currently experimenting with a methodology that will allow us to test and deploy new items individually as well – which means that a new (small) item can go from definition to release in as little as a week, too.

Overall, this kind of workflow is great for us, and great for our customers.  We don’t need to wait a year to change something to make our product more compelling, and customers don’t have to wait a year to get something they want implemented.  We also avoid the shock of suddenly introducing a year’s worth of development to all our customers all at once – a handful of minor changes every month (or week) is much easier to cope with.

However, it also means that Foliotek is never exactly the same as it was the week before.  Every time something changes, there is some risk that something breaks.   We handle this risk in two ways:

  1. We test extremely thoroughly
  2. We fix any problems that arise within about a week (severe problems usually the same day)

At first, we did all testing manually.  This is the best way to test, assuming you have enough good testers with enough time to do it well.  Good testers can’t be just anyone – they have to have a thorough knowledge of how the system should work, they have to care that it does work perfectly, and they have to have a feel for how they might try to break things.  Having enough people like this with enough time to do testing is expensive.

Over time, two related things happened.  One was that we added more developers to the project and started building more, faster.  The other was that the system itself was growing bigger and more complex.

As more people developed on it and the system grew more complex, our testing needs grew exponentially.  More complexity and more developers meant much, much more potential for side-effects – problems where one change affects a different (but subtly related) subsystem.  Side-effects are, by their nature, impossible to predict.  The only way to catch them was to test EVERYTHING any time ANYTHING changed.

We didn’t have enough experienced testers to do that every month (new development release) let alone every week (bugfix release).

To deal with that, we started by writing a manual regression test script to run through each week.  While this didn’t free up any time overall, it did mean that once the test was written well, anyone could execute it.  This was doable because we had interns who had to be around to help handle support calls anyway – and they were only intermittently busy.  In their free time they could execute the tests.

Another route we could have gone would have been to write automated unit tests (http://en.wikipedia.org/wiki/Unit_testing).  Basically, these are tiny contracts the developers would write that say something like “calling the Add function on the User class with name Luke will result in the User database table having a new row with name Luke”.  Each time the project is built, the contracts are verified.  This is great for projects like code libraries and APIs, where the product of the project IS the result of each function.  For a web application, though, the product is the complex interaction of functions and how they produce on-screen behavior.  There are lots of ways that the individual functions could all be correct and the behavior still fails.  It is also very difficult (if not impossible) to unit test the client-side parts of a web application – JavaScript, AJAX, CSS, etc.  Unit testing would cost a non-trivial amount (building and maintaining the tests) for a trivial gain.
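
To make that concrete, a contract like the one described above might look something like this as an NUnit test.  The User class, its Add and Get methods, and the Name property are hypothetical – this is just a sketch of the shape such a test takes:

    using NUnit.Framework;

    [TestFixture]
    public class UserTests
    {
        [Test]
        public void Add_WithNameLuke_CreatesUserNamedLuke()
        {
            // Hypothetical API: Add() persists a new user row and returns its ID.
            int id = User.Add("Luke");

            // Load the row back through the data layer and verify it was saved.
            User saved = User.Get(id);
            Assert.IsNotNull(saved);
            Assert.AreEqual("Luke", saved.Name);
        }
    }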

Eventually, we discovered the Selenium project (http://seleniumhq.org/download/).  The idea of Selenium is basically to take our manual regression test scripts, and create them such that a computer can automatically run the tests in a browser (pretty much) just like a human tester would.  This allows us to greatly expand our regression test coverage, and run it for every single change we make and release.

Here are the Selenium tools we use and what we use them for:

  • Selenium IDE (http://release.seleniumhq.org/selenium-ide/): A Firefox plugin that lets you quickly create tests using a ‘record’ function that builds them out of your clicks, lets you manually edit them to make your tests more complex, and runs them in Firefox.
  • Selenium RC (http://selenium.googlecode.com/files/selenium-remote-control-1.0.3.zip): A Java application that will take the tests you create with Selenium IDE and run them in multiple browsers (Firefox, IE, Chrome, etc.).  It runs from the command line, so it’s fairly easy to automate test runs into build actions and the like as well (see the sketch after this list).
  • Sauce RC (http://saucelabs.com/downloads): A fork of RC that adds a web UI on top of the command line interface.  It’s useful for quickly debugging tests that don’t execute properly in non-Firefox browsers.  It also integrates with SauceLabs – a service that lets you run your tests in the cloud on multiple operating systems and browsers (for a fee).
  • BrowserMob (http://browsermob.com/performance-testing): An online service that will take your Selenium scripts and use them to generate real user traffic on your site.  Essentially, it spins up real machines and Firefox instances to run your test concurrently – each just as you would run it locally – for a fee.  It costs less than $10 to test up to 25 “real browser users”, which can actually map to many more users than that, since the automated test doesn’t have to think between clicks.  It gets expensive quickly to test more users than that.
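
To give a feel for what an automated run looks like from code, here is a minimal sketch using the Selenium RC .NET client driver.  It assumes the RC server is already running on port 4444; the site URL, locators, and credentials are made up for illustration:

    using System;
    using Selenium; // Selenium RC .NET client driver

    public class LoginSmokeTest
    {
        public static void Main()
        {
            // Connect to the local Selenium RC server and launch Firefox.
            ISelenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://www.example.com/");
            selenium.Start();

            // Walk through the same steps a manual tester would.
            selenium.Open("/login");
            selenium.Type("id=username", "testuser");      // hypothetical locators
            selenium.Type("id=password", "testpassword");
            selenium.Click("id=loginButton");
            selenium.WaitForPageToLoad("30000");

            // A simple check that the login landed somewhere sensible.
            if (!selenium.IsTextPresent("Welcome"))
                throw new Exception("Login smoke test failed");

            selenium.Stop();
        }
    }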

Selenium is a huge boon for us.  We took the manual tests that would occupy a tester for as much as a day, and made it possible to run those same tests with minimal interaction in half an hour or less.  We’ll be able to cover more test cases and run them more often – even running them as development occurs, to catch issues earlier.

In my next post, I’ll talk about the details of how you build tests, run them, maintain them, etc. with the tools mentioned above.

Threading with Impersonation in an ASP.NET Project


Every once in a while, you might run into a need to do something that takes some time in a web app, but doesn’t require user interaction. Maybe you are processing an uploaded file (rescaling images, unzipping, etc.). Maybe you are rewriting some statistical data based on new posts. Basically, something that takes minutes or hours, but isn’t important enough to keep the user waiting on it interactively.

You could set up a “job” in a database to be run the next time your timer runs (see https://lanitdev.wordpress.com/2010/03/16/running-a-scheduled-task/). If you don’t already have a timer, though, that can be overkill – especially if you don’t care whether multiple jobs might run at once.

In my case, I needed to export a large volume of data to a zip file. I asked up front for an email address, and the user would receive a link to the completed zip in an email later. The job would only be performed by admins, and even then only about once a year – so there was no need to schedule the job. I could just fire it off when the user requested it.

An easy way to do this is to use the .NET threading objects in System.Threading. Because I need to save a file, I also have one additional issue – threads don’t automatically run under the same account that the site does, so I had to include code to impersonate a user that has write permissions.

Here’s a bit of code to get you started:

// param class to pass multiple values to the worker thread
private class ExportParams
{
    public int UserID { get; set; }
    public string Email { get; set; }
    public string ImpersonateUser { get; set; }
    public string ImpersonateDomain { get; set; }
    public string ImpersonatePassword { get; set; }
}

protected void btnExport_Click(object sender, EventArgs e)
{
    //  .... code to get current app user, windows user to impersonate .....

    // fire off the long-running export on a background thread
    Thread t = new Thread(new ParameterizedThreadStart(DoExport));
    t.Start(new ExportParams()
    {
        ImpersonateUser = username,
        ImpersonateDomain = domain,
        ImpersonatePassword = password
    });

    // show user 'processing' message .....
}

private void DoExport(object param)
{
    ExportParams ep = (ExportParams)param;

    // run the export under an account that has write permissions
    using (var context = Security.Impersonate(ep.ImpersonateUser, ep.ImpersonateDomain,
        ep.ImpersonatePassword))
    {
        // do the work here..............
    }
}

Here’s the relevant part of the Security class that does the impersonation:

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
// .....
public class Security
{
    public const int LOGON_TYPE_INTERACTIVE = 2;
    public const int LOGON_TYPE_PROVIDER_DEFAULT = 0;

    // Win32 API used to get an access token for a specific Windows user from its user name and password
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static public extern bool LogonUser(string userName, string domain, string passWord, int logonType, int logonProvider, ref IntPtr accessToken);

    public static WindowsImpersonationContext Impersonate()
    {
        return Impersonate("DEFAULT_USER", "DEFAULT_DOMAIN", "DEFAULT_PASSWORD");
    }

    public static WindowsImpersonationContext Impersonate(string username, string domain, string password)
    {
        IntPtr accessToken = IntPtr.Zero;
        var success = LogonUser(username, domain, password, LOGON_TYPE_INTERACTIVE, LOGON_TYPE_PROVIDER_DEFAULT, ref accessToken);

        if (!success)
            return null;

        // wrap the token in an identity and start impersonating; disposing the
        // returned context (e.g. via a using block) reverts to the original identity
        WindowsIdentity identity = new WindowsIdentity(accessToken);
        return identity.Impersonate();
    }
    // ..........
}

Loading images last with jQuery


There are lots of ways to make your webpages faster and more responsive. YSlow is a great tool to help you find the specific changes that will make a particular page faster.

One of the best things you can do is reduce the number of requests (css/js/images/etc) to the server. Typically, this would mean that you would combine files – merge all of your JS and CSS (and minify while you are at it), and use CSS Sprites to combine images.

One major problem with CSS Sprites is that they can be quite painful to maintain. Over time, if you want to add or change some of your images, you basically need to rebuild and replace the combined image and all of the CSS rules specifying coordinates. Sometimes, this makes the CSS Sprite technique unreasonable to implement.

In one such case, we had about 50 images in one application that were causing the page to take a long time to load. These images were previews of different design choices that the user could make. The design choices themselves (and their previews) were database driven, so that we could add new designs through an admin interface. Spriting the previews would have seriously hampered that flexibility.

One other design consideration was that the previews weren’t that important – the page was fully functional and usable without the images. In fact, the designs weren’t even visible until you toggled the design menu.

There is a lazy loader plugin for jQuery already available here – but it didn’t fit our needs. Instead of skipping images in order to get the page working as soon as possible (and initiating the load once the page is usable), it is made to skip loading offscreen images until they are scrolled into view. It might have somewhat worked for our needs, but I thought it was better to load the images as soon as possible, instead of waiting for the design menu to be expanded to initiate the load. That way, most of the time the designs would be visible by the time the user opens the menu, but loading them wouldn’t interfere with the rest of the interface.

My solution was to set the src for all of the previews to a single animated loading image – like one you can get here. Then, I set a custom attribute on each image holding the real preview’s URL. Finally, some jQuery code runs after the page is done loading and replaces each src attribute with the URL in the custom attribute, which loads the real image.

Sample HTML:

    <li templateid="7bcf8f23-fdd0-45c5-a429-d2ffb59e47f0" class="selected"><span>3D Dots
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/7bcf8f23-fdd0-45c5-a429-d2ffb59e47f0/preview.jpg"
            class="deferredLoad" alt="3D Dots Dark" />
    <li templateid="b1a09e28-629e-472a-966e-fc98fc269607"><span>3D Dots Lite</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/b1a09e28-629e-472a-966e-fc98fc269607/preview.jpg"
            class="deferredLoad" alt="3D Dots Lite" />
    <li templateid="e121d26a-9c8f-466f-acc7-9a79d5e8cfa9"><span>Beauty</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/e121d26a-9c8f-466f-acc7-9a79d5e8cfa9/preview.jpg"
            class="deferredLoad" alt="Beauty" />
    <li templateid="322e4c7a-33e7-4e05-bb72-c4076a83a3d0"><span>Black and White</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/322e4c7a-33e7-4e05-bb72-c4076a83a3d0/preview.jpg"
            class="deferredLoad" alt="Black and White" />
    <li templateid="57716da9-91ef-4cf0-82f1-722d0770ad7f"><span>Blank</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/57716da9-91ef-4cf0-82f1-722d0770ad7f/preview.jpg"
            class="deferredLoad" alt="Blank" />
    <li templateid="a79e1136-db47-4acd-be3e-2daf4522796d"><span>Blue Leaves</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/a79e1136-db47-4acd-be3e-2daf4522796d/preview.jpg"
            class="deferredLoad" alt="Blue Leaves" />
    <li templateid="03cb737d-4da7-46d5-b4e4-5ad4b4a3aaf4"><span>Blue Open</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/03cb737d-4da7-46d5-b4e4-5ad4b4a3aaf4/preview.jpg"
            class="deferredLoad" alt="Blue Open" />
    <li templateid="899dff2f-38ba-44f7-9fe2-af66e62674a4"><span>Compass</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/899dff2f-38ba-44f7-9fe2-af66e62674a4/preview.jpg"
            class="deferredLoad" alt="Compass" />

Sample JavaScript:

        $("img.deferredLoad").each(function() {
            var $this = $(this);
            $this.attr("src", $this.attr("deferredSrc")).removeClass("deferredLoad");

Unexpected benefits of Precompilation of LINQ


I once had a manager who told me: “I can solve any maintenance problem by adding a layer of abstraction.  I can solve any performance problem by removing a layer of abstraction.”

I think LINQ to SQL is a wonderful way to abstract the persistence layer into elegant, easy-to-use, easy-to-manipulate, and easy-to-maintain lines of code.  Instead of writing SQL, which amounts to specifying “how to retrieve” the data, you manipulate an expression tree that gets closer to specifying “what data I want”.  The upside of this is huge – you can change the expression tree at any level of your code, and let .NET decide how to best write the SQL at the last possible moment – which effectively gets rid of dealing with intermediate results and inefficiently written SQL.  Unfortunately, this abstraction does indeed cause a performance hit – the translation/compilation of the tree to SQL – and it’s probably much bigger than you would think.  See http://peterkellner.net/2009/05/06/linq-to-sql-slow-performance-compilequery-critical/ to see what I mean.  In my analysis (using ANTS Profiler), when using uncompiled LINQ, the performance hit is usually about 80% compilation and only 20% retrieving the data!  Thankfully, .NET does allow you to precompile a LINQ query and save the compilation to use over and over again.

Your natural tendency when hearing those kinds of numbers might be to precompile every single LINQ query you write.  There’s a big downside to doing that, though – you lose the ability to manipulate the compiled query in other parts of your code.  Another downside is that the precompilation code itself is fairly ugly and hard to read and maintain.

I’m a big believer in avoiding “premature optimization”.  What happens if you precompile everything, and in a version or two Microsoft resolves the issue and caches compilations for you behind the scenes?  You have written a ton of ugly code that breaks a major benefit of LINQ to SQL and is totally unnecessary.

Instead, I recommend you go after the low hanging fruit first – precompile the most frequently accessed queries in your application and the ones that gain no benefit from manipulating the expression tree.  In the applications I work on – there is a perfect case that fits both of these – the “get” method that returns the LINQ object representation of a single row in the database.  These are hit quite often – and there is absolutely no case where the expression tree is further refined.

The old way it was written:

	public static Item Get(int itemid) {
		return (from i in DataContext.Items
			where i.ItemID == itemid
			select i).First();
	}

The new way with Precompiled LINQ:

	private static Func<ModelDataContext, int, Item>
		GetQuery = CompiledQuery.Compile(
			(ModelDataContext DataContext, int itemid) =>

				(from i in DataContext.Items
					where i.ItemID == itemid
				select i).First()
		);

	public static Item Get(int itemid) {
		return GetQuery.Invoke(DataContext, itemid);
	}
Applying this fairly simple change to the application, I’d estimate we got 80%+ of the benefit of compiled LINQ, at the expense of a few extra lines of code per object/table – and with no loss of expression tree manipulation anywhere else.