Selenium Regression Testing Part II – Tips and Tricks


In my last post, I talked about how you can use Selenium to do real regressions tests for web applications. It’s a great way to automate testing the real user experience, and not just the backend stuff.

That said, Selenium is a relatively new technology, and it’s not without its issues. Building your first test can be a trial-and-error process. There are so many different ways to do the same test that it can be overwhelming, and often one or more of the ways you try won’t work. Below are some of the common problems I ran into and tips I found.

  • Selenium Commands
    • waitForElementPresent
      • this will block your test from proceeding until it finds the identified element.  It checks once a second for 30 seconds.
      • this was the most common way I dealt with ‘ajax’ type interactions where it takes an unknown period of time for something to show up
      • I also use it generally instead of verifyElementPresent – it does basically the same thing with a little wiggle room
    • mouseDown/mouseUp/mousePressed/click
      • mostly equivalent, but sometimes you need to match the desired event to a javascript handler
      • try click first.  If it doesn’t work the way you want, move on to mouseDown and so on.
    • waitForFrameToLoad/selectFrame
      • important if you use iFrames (modal dialog, etc.)
      • the selenium selectors only hit the current frame – if your target is in a different frame, you have to select that frame first
      • an easy way to get back to the root window is to do selectFrame null
    • type vs. typeKeys
      • type fills out an input in code – if it works, use this.  You can use this, and then fire a single typeKeys for the last character if you need an event to be triggered.
      • typeKeys fires the key events on the element
        • has some idiosyncrasies – certain letters (I’m looking at you, ‘y’) are reserved to do other special keypresses
        • necessary if you are using a wysiwyg ‘designmode’ type box instead of a standard input
    • verifyX vs. assertX
      • if verify fails, the test continues (and is marked as errored).  If assert fails, the test aborts.
      • Usually verify is better, unless one task blocks a future one from functioning correctly
    • runScript vs. Eval vs. Expression
      • runScript inserts the javascript you provide into the current frame/window.  Useful for those things selenium doesn’t support – like moving a cursor around and selecting text in a wysiwyg
      • Eval runs javascript in the context of selenium. Handy for complex checks – use waitForEval (for instance, checking the css background image property of a particular element)
        • Use this.browserbot.findElement("selenium selector") to find elements the way selenium would
        • Use window.X to access the current frame/window context objects
      • Expression is similar to Eval, but uses Selenium’s custom expression format instead of javascript (though you can combine the two by using javascript{})
        • storedVars[‘key’] allows you to get to a variable you created with a Selenium ‘store’ expression
    • selectPopUp
      • useful for checking stuff in a popup that was initiated
      • easiest to select it by the html title of the popup, but do a ‘pause’ first to let it load
  • Selenium Selectors and XPath
    • In general, be as abstract as possible.
      • Don’t select individual server generated ids (hand crafted html ids are ok if you don’t expect them to change)
      • Don’t select on complicated relationships ( /div[0]/div[2]/a[4] ) – your html structure will change and you’ll have to maintain it
      • Select links by the simple link=text when possible – easy to read/maintain, unlikely to change
      • Use //that (any descendant) instead of /this/that where possible
      • . references ‘this’ element.  Helps to select something with a particular text: //div[@id='publish-private-shares']//p[.='This is pretty cool.']
      • contains() is useful if you don’t know the exact text (for instance, when an element has multiple css classes): //div[@id='pageContent' and contains(@class,'contenteditable') and h2='Goals']/p[1]
  • Selenium RC
    • While you can use Selenium IDE to create a c# version of your tests – if you do so, you have two tests to maintain.  You can run your ‘selenese’ tests directly with RC, too.
      • JAVAPATH\java.exe -jar SELENIUMPATH\selenium-server.jar -htmlSuite "*BROWSER" "BASESITEURL" "SUITEFILEPATH" "RESULTSFILEPATH"
      • I’ve written a simple C# console project that automatically finds the correct java path and fires up the test when you run it.  If people ask in the comments, I’ll post it.
    • Last I checked, Chrome and Safari-Windows don’t work.  Chrome is supposed to be fixed in Selenium RC 1.0.4
  • Sauce RC
    • This is a great UI to help test multiple browsers, but there are a couple of issues
      • Firefox works, but only in dual window mode
      • IE works, but only in single window mode.
      • The ‘timeout’ setting implies a default timeout per action in your test, but it is actually the timeout for your entire test run.  Since it defaults to 30 seconds, you’ll probably want to change it, or else your tests will suddenly die for no reason with no explanation/log.
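Tying several of the commands above together, here’s a minimal sketch of what a ‘selenese’ test looks like on disk – Selenium IDE stores it as an html table of command/target/value rows. The url, ids, and text below are made up for illustration:

```html
<table>
  <tr><td>open</td><td>/login</td><td></td></tr>
  <!-- block until the ajax-built form shows up (checks once a second for 30 seconds) -->
  <tr><td>waitForElementPresent</td><td>id=login-form</td><td></td></tr>
  <tr><td>type</td><td>id=username</td><td>testuser</td></tr>
  <tr><td>click</td><td>link=Sign in</td><td></td></tr>
  <!-- verify rather than assert, so the test continues (marked errored) on failure -->
  <tr><td>verifyElementPresent</td><td>//div[@id='welcome']</td><td></td></tr>
</table>
```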

I’m sure there is probably more I’ve forgotten, so leave a comment if you get stuck and I’ll try to help out if I can.

Web Application Functional Regression Testing Using Selenium


At Foliotek, we use a rapid development methodology.  Typically, a new item will go from definition through coding to release in a month’s time (bucketed along with other new items for the month).  A bugfix will nearly always be released within a week of the time it was reported.  In fact, we are currently experimenting with a methodology that will allow us to test and deploy new items individually as well – which means that a new (small) item can go from definition to release in as little as a week, too.

Overall, this kind of workflow is great for us, and great for our customers.  We don’t need to wait a year to change something to make our product more compelling, and customers don’t have to wait a year to get something they want implemented.  We also avoid the shock of suddenly introducing a year’s worth of development to all our customers all at once – a handful of minor changes every month (or week) is much easier to cope with.

However, it also means that Foliotek is never exactly the same as it was the week before.  Every time something changes, there is some risk that something breaks.   We handle this risk in two ways:

  1. We test extremely thoroughly
  2. We fix any problems that arise within about a week (severe problems usually the same day)

At first, we did all testing manually.  This is the best way to test, assuming you have enough good testers with enough time to do it well.  Good testers can’t be just anyone – they have to have a thorough knowledge of how the system should work, they have to care that it does work perfectly, and they have to have a feel for how they might try to break things.  Having enough people like this with enough time to do testing is expensive.

Over time, two related things happened.  One was that we added more developers to the project, and started building more, faster.  Two was that the system was growing bigger and more complex.

As more people developed on it and the system grew more complex, our testing needs grew exponentially.  The rise in complexity and in the number of people developing led to much, much more potential for side-effects – problems where one change affects a different (but subtly related) subsystem.  Side-effects by their nature are impossible to predict.  The only way to catch them was to test EVERYTHING any time ANYTHING changed.

We didn’t have enough experienced testers to do that every month (new development release) let alone every week (bugfix release).

To deal with that, we started by writing a manual regression test script to run through each week.  While this didn’t free up any time overall, it did mean that once the test was written well, anyone could execute it.  This was doable because we had interns who had to be around to help handle support calls anyway – and they were only intermittently busy.  In their free time they could execute the tests.

Another route we could have gone would have been to write automated unit tests (http://en.wikipedia.org/wiki/Unit_testing).  Basically, these are tiny contracts the developers write that say something like “calling the Add function on the User class with name Luke will result in the User database table having a new row with name Luke”.  Each time the project is built, the contracts are verified.  This is great for projects like code libraries and APIs, where the product of the project IS the result of each function.  For a web application, though, the product is the complex interaction of functions and how they produce an on-screen behavior.  There are lots of ways that the individual functions could all be correct and the behavior still fails.  It is also very difficult or impossible to unit test the client-side parts of a web application – javascript, AJAX, CSS, etc.  Unit testing would cost a non-trivial amount (building and maintaining the tests) for a trivial gain.
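For contrast, here is roughly what one of those tiny contracts looks like in code. The `User`/`Add` names echo the hypothetical example above – none of this is from our actual codebase:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for a real data layer
public class UserTable
{
    private readonly List<string> rows = new List<string>();
    public void Add(string name) { rows.Add(name); }
    public bool HasRow(string name) { return rows.Contains(name); }
}

public class UserTests
{
    // The contract: calling Add with name "Luke" results in a row named "Luke"
    public static void Main()
    {
        var users = new UserTable();
        users.Add("Luke");
        if (!users.HasRow("Luke"))
            throw new Exception("contract failed: no row for Luke");
        Console.WriteLine("contract verified");
    }
}
```

Every build can run contracts like this automatically – but as argued above, all of them can pass while the on-screen behavior they combine into is still broken.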

Eventually, we discovered the Selenium project (http://seleniumhq.org/download/).  The idea of Selenium is basically to take our manual regression test scripts, and create them such that a computer can automatically run the tests in a browser (pretty much) just like a human tester would.  This allows us to greatly expand our regression test coverage, and run it for every single change we make and release.

Here are the Selenium tools we use and what we use them for:

  • Selenium IDE (http://release.seleniumhq.org/selenium-ide/): A Firefox plugin that lets you quickly create tests using a ‘record’ function that builds them out of your clicks, lets you manually edit them to make your tests more complex, and runs them in Firefox.
  • Selenium RC (http://selenium.googlecode.com/files/selenium-remote-control-1.0.3.zip):  A java application that will take the tests you create with Selenium IDE and run them in multiple browsers (firefox, ie, chrome, etc).  It runs from the command line, so it’s fairly easy to automate test runs into build actions/etc as well.
  • Sauce RC (http://saucelabs.com/downloads): A fork of RC that adds a web ui on top of the command line interface.  It’s useful for quickly debugging tests that don’t execute properly in non-firefox browsers.  It also integrates with SauceLabs – a service that lets you run your tests in the cloud on multiple operating systems and browsers (for a fee).
  • BrowserMob (http://browsermob.com/performance-testing): An online service that will take your selenium scripts and use them to generate real user traffic on your site.  Essentially, it spawns as many real machines and instances of Firefox as you request to run your test – each just as you would do locally – for a fee.  It costs less than $10 to test up to 25 “real browser users” – which actually can map to many more users than that, since the automated test doesn’t have to think between clicks.  It gets expensive quickly to test more users than that.

Selenium is a huge boon for us.  We took the manual tests that would occupy a tester for as much as a day, and made it possible to run those same tests with minimal interaction in a half hour or less.  We’ll be able to cover more test cases and run them more often – even running them as development occurs, to catch issues earlier.

In my next post, I’ll talk about the details of how you build tests, run them, maintain them, etc. with the tools mentioned above.

Threading with Impersonation in an ASP.NET Project


Every once in a while, you might run into a need to do something that takes some time in a web app, but doesn’t require user interaction. Maybe you are processing an uploaded file (rescaling images, unzipping, etc). Maybe you are rewriting some statistical data based on new posts. Basically, something that takes minutes or hours – but isn’t that important to be interactive with the user.

You could set up a “job” in a database to be run the next time your timer runs (see https://lanitdev.wordpress.com/2010/03/16/running-a-scheduled-task/). If you don’t have a timer yet, though, that can be overkill if you don’t care that multiple jobs may run at once.

In my case, I needed to export a large volume of data to a zip file. I asked up front for an email address – and the user will receive a link to the completed zip in an email later. The job would only be performed by admins, and even then only about once a year – so there was no need to schedule the job – I could just fire it off when the user requested it.

An easy way to do this is to use the .NET threading objects in System.Threading. Because I need to save a file, I also have one additional issue – threads don’t automatically run under the same account that the site does, so I had to include code to impersonate a user that has write permissions.

Here’s a bit of code to get you started:

// param class to pass multiple values
private class ExportParams
{
    public int UserID { get; set; }
    public string Email { get; set; }
    public string ImpersonateUser { get; set; }
    public string ImpersonateDomain { get; set; }
    public string ImpersonatePassword { get; set; }
}

protected void btnExport_Click(object sender, EventArgs e)
{
    // .... code to get current app user, windows user to impersonate .....

    Thread t = new Thread(new ParameterizedThreadStart(DoExport));
    t.Start(new ExportParams()
    {
        UserID = CurrentUserID,
        Email = txtEmail.Text,
        ImpersonateUser = username,
        ImpersonateDomain = domain,
        ImpersonatePassword = password
    });

    // show user 'processing' message .....
}

private void DoExport(object param)
{
    ExportParams ep = (ExportParams)param;

    using (var context = Security.Impersonate(ep.ImpersonateUser, ep.ImpersonateDomain,
        ep.ImpersonatePassword))
    {
        // do the work here..............
    }
}

Here’s the relevant part of the Security class that does the impersonation:

using System.Runtime.InteropServices;
using System.Security.Principal;
// .....
public class Security
{
    //.............
    public const int LOGON_TYPE_INTERACTIVE = 2;
    public const int LOGON_TYPE_PROVIDER_DEFAULT = 0;

    // Using this api to get an accessToken of a specific Windows User by its user name and password
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool LogonUser(string userName, string domain, string passWord, int logonType, int logonProvider, ref IntPtr accessToken);

    public static WindowsImpersonationContext Impersonate()
    {
        return Impersonate("DEFAULT_USER", "DEFAULT_DOMAIN", "DEFAULT_PASSWORD");
    }

    public static WindowsImpersonationContext Impersonate(string username, string domain, string password)
    {
        IntPtr accessToken = IntPtr.Zero;
        var success = LogonUser(username, domain, password, LOGON_TYPE_INTERACTIVE, LOGON_TYPE_PROVIDER_DEFAULT, ref accessToken);

        if (!success)
            return null; // logon failed - callers should check for null

        WindowsIdentity identity = new WindowsIdentity(accessToken);

        return identity.Impersonate();
    }
    // ..........
}

Simple Usability Testing with TryMyUI.com


Any software developer that is concerned with making great software should be testing their interfaces on their users (or people similar to their users). It’s easy to convince yourself you’ve built a great UI if you only see people on the team using it. People on the team have intimate knowledge of how the system is supposed to work, and so they nearly always succeed.

The real test comes when you watch someone who has never seen your system before attempt to perform some vague tasks you have set out for them to do.  These types of usability tests can not only point out the failures of your system to match what typical users expect – but watching these users struggle with stuff you wrote can be a strong motivator to improve things.  For that reason, it’s important to do usability tests as often as possible, and let as many people watch as possible.

At Lanit, we’ve attempted to do usability tests a number of ways:

We’ve done full scale tests that record mouse tracking and video in addition to audio. These take a ton of time to set up, to edit/summarize the results, and to share. And, it doesn’t seem that the mouse tracking or video added much to what the user was saying (assuming the moderator was prompting for their thoughts often). This time investment caused us to do them at most once or twice a year.

We’ve tried doing simpler screen-cast and voice recordings at a local university. This was an improvement – but it still took time to schedule a room, and we still needed to bring the videos back to the office to share, or spend time summarizing/editing them. It also ate up half a day for two people to find participants and set up the tests. And it was a bit awkward to approach people to ask them to let you record them testing your software.  Usually, we managed to do these every few months.

We’ve also tried bringing users in to our office – so that we can easily share the screen and voice live, and have an immediate discussion afterwards (similar to what Steve Krug suggests here: http://network.businessofsoftware.org/video/steve-krug-on-the-least-you). This was an improvement, but it still required a time investment in finding willing users to come to us.  We only tried this once – but given the effort to advertise etc., it felt like we would still only do a test at most once a month.

Finally, we decided to give one of the new services that have recently popped up a try. There are many of these services that do some form of usability testing for you – but we narrowed it down to usertesting.com and trymyui.com, because they seemed to best emulate what we were doing live (the only real difference is that users are screened/trained by the site to speak what they are thinking, instead of requiring a live moderator to prompt them). I chose trymyui mainly because I liked the results of my first (free) test, and it was slightly cheaper ($25/user/test instead of $39). All we had to do was provide the script of tasks to accomplish (which took maybe 10 or 15 minutes to create) and request testers; usually within an hour or two we had a great recording of the user’s screen and voice. I had one experience where a video didn’t arrive for about 2 days, but their support was very helpful and indicated that they pre-screen the results to ensure the testers actually tried the tasks (a bonus, as you are basically guaranteed to get a good video for your $25).

We were hoping that going the online service route would save us a bunch of time – and it did – but an unexpected benefit was that the short turnaround allowed us to do a kind of A/B testing.  Before, when we did live tests, we’d come away from the 3-5 tests in a session with 10 or so items we wanted to improve, and we’d put them in a list to take care of some time in the future.  Then we’d have to wait until the next session to see whether we had made a significant improvement, and what the next set of problems was.

With trymyui, we could often develop a change to improve the experience and test that change the same day.  This made building a great UI even more motivating – because you could often see the next user do better as a direct result of the last test.  In the end, we made several improvements in a week that would have taken us months to do before.  And it was so effortless to set up, and so easy to see the benefits, that I know we will continue to use this site to test our UIs on a regular basis.

Handy ASP.NET Debug Extension Method


Most of the programmers I know (myself included) don’t bother with the built-in Visual Studio debugging tools. They are slow and resource intensive. Usually, it’s more efficient to just do one or more Response.Write calls to see key data at key steps.

That can be a hassle, though. Most objects don’t print very well. You have to write a loop or some LINQ/String.Join code to print out the items in a collection.
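For example, dumping a list by hand tends to look something like this throwaway snippet (the collection here is invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var names = new List<string> { "alice", "bob", "carol" };
        // the typical one-off: project each item to a string, then join
        Console.WriteLine(String.Join(", ", names.Select(n => n.ToUpper()).ToArray()));
        // prints: ALICE, BOB, CAROL
    }
}
```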

Inspiration struck – couldn’t I write an extension method on object to write out a reasonable representation of pretty much anything? I could write out html tables for lists with columns for properties, etc.

Then I thought – I love the javascript debug console in firebug. I can drill down into individual items without being overwhelmed by all of the data at once. Why not have my debug information spit out javascript to write to the debug console? That also keeps it out of the way of the rest of the interface.

Here’s the code:

public static void Debug(this object value)
{
    if (HttpContext.Current != null)
    {
        HttpContext.Current.Response.Debug(value);
    }
}

public static void Debug(this HttpResponse Response, params object[] args)
{
    new HttpResponseWrapper(Response).Debug(args);
}

public static void Debug(this HttpResponseBase Response, params object[] args)
{
    ((HttpResponseWrapper)Response).Debug(args);
}

public static void Debug(this HttpResponseWrapper Response, params object[] args)
{
    // only emit the script block for normal html pages
    if (Response != null && Response.ContentType == "text/html")
    {
        Response.Write("<script type='text/javascript'>");
        Response.Write("if(console&&console.debug){");
        Response.Write("console.debug(" + args.SerializeToJSON() + ");");
        Response.Write("}");
        Response.Write("</script>");
    }
}

The various overloads allow:

myObject.Debug();
new {message="test",obj=myObject}.Debug();
Response.Debug("some message",myObject,myObject2);
//etc

The only other thing you’ll need is the awesome JSON.NET library for the .SerializeToJSON() call to work (it turns the .NET object into a form javascript can deal with). Get it here. FYI, the library does choke on serializing some complex objects, so occasionally you’ll need to simplify before calling debug.
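Note that .SerializeToJSON() is not part of JSON.NET itself – it’s a thin extension method you write yourself on top of the library. A minimal version (assuming the Newtonsoft.Json namespace is referenced) might be:

```csharp
using Newtonsoft.Json;

public static class JsonExtensions
{
    // Turn any object (or object[] of mixed values) into a javascript literal
    // suitable for embedding inside console.debug(...)
    public static string SerializeToJSON(this object value)
    {
        return JsonConvert.SerializeObject(value);
    }
}
```

With that in place, new { message = "test" }.SerializeToJSON() yields {"message":"test"}.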

Running a Scheduled Task Inside an ASP.NET Web Application


Occasionally, there might be some activity in your web application that should trigger some code to execute at a later date. One of the most common cases is sending an email (reminder, change digest, etc), but there are other uses too. You might want to defer some processing-intensive activity to off hours. In my most recent case, I needed to check for changes to a purchased/installed application’s db and replicate them to another product every 5 minutes or so.

I’ve solved similar problems before by:

  1. From the web application, inserting a row into a ‘scheduled tasks’ database table with a time to execute and a script url to run
  2. Creating and running a windows service somewhere that wakes up every 5 minutes or so, looks at the table for ‘due’ items, and opens a request to each url

This works, but it has some drawbacks.

  • You have to learn how to build and deploy a service.  Not particularly hard, but something a web developer doesn’t really need to know
  • You have to copy or figure out how to share some data access logic between the service and the web application, and maintain changes
  • You have to figure out where to deploy the service
  • You have to have somewhere to deploy the service – if you are using a shared webhost with no RDP you are out of luck
  • It’s hard to be sure the service is running.  It’s easy to forget about it if your infrastructure changes.
  • You need to deal with it when the service errors.
  • You have to be careful that the service doesn’t run the same script twice (or make it so it doesn’t hurt anything if it does), in case it gets run on two machines, etc.
  • Many more tiny, but real headaches for dealing with and maintaining a separate but connected project

I really didn’t want to go through all of that again for yet another project.  There had to be a simpler solution.

Thanks to Google, I found this article, which led me to use a Cache object expiration to simulate the service.  It’s a hack, but it solved my issue.

Later, I found this StackOverflow post about the same issue/fix.  The comments led me to a System.Timers.Timer solution, which is easier to understand. Here it is:

The global.asax:

    public const int MINUTES_TO_WAIT = 5;

    private string _workerPageUrl = null;
    protected string WorkerPageUrl
    {
        get
        {
            if (_workerPageUrl == null)
                _workerPageUrl = (Application["WebRoot"] + VirtualPathUtility.ToAbsolute("~/DoTimedWork.ashx")).Replace("//", "/").Replace(":/", "://") + "?schedule=true";

            return _workerPageUrl;
        }
    }

    protected void Application_Start(Object sender, EventArgs e)
    {
        Uri reqUri = HttpContext.Current.Request.Url;
        Application["WebRoot"] = new UriBuilder(reqUri.Scheme, reqUri.Host, reqUri.Port).ToString();
        Application["TimerRunning"] = false;

        //StartTimer();   // don't start the timer unless we call the url (don't run in dev, etc).  Url will be called by montastic for the live site.
    }

    private void StartTimer()
    {
        if (!(bool)Application["TimerRunning"]) // don't want multiple timers
        {
            System.Timers.Timer timer = new System.Timers.Timer(MINUTES_TO_WAIT * 60 * 1000);
            timer.AutoReset = false;
            timer.Enabled = true;
            timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed);
            timer.Start();
            Application["TimerRunning"] = true;
        }

    }

    void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        Application["TimerRunning"] = false;
        System.Net.WebClient client = new System.Net.WebClient();
        // have to issue a request so that there is a context
        // also lets us separate all of the scheduling logic from the work logic
        client.DownloadData(WorkerPageUrl + "&LastSuccessfulRun=" + Server.UrlEncode((CurrentSetting.LastSuccessfulRun ?? DateTime.Now.AddYears(-1)).ToString()));
    }

    protected void Application_BeginRequest(Object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.Path == GetLocalUrl(new Uri(WorkerPageUrl)))
        {
            CurrentSetting.LastRun = DateTime.Now;
            try
            {
                CurrentSetting.RunCount++;
            }
            catch (Exception)
            {
                CurrentSetting.RunCount = 0;  // just in case of an overflow
            }
            SaveSettings();
        }
    }
    protected void Application_EndRequest(Object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.Path == GetLocalUrl(new Uri(WorkerPageUrl)))
        {
            if (HttpContext.Current.Error == null)
            {
                CurrentSetting.LastSuccessfulRun = DateTime.Now;
                SaveSettings();
            }

            if (HttpContext.Current.Request["schedule"] == "true") // register the next iteration whenever the worker finishes
                StartTimer();
        }
    }

    void Application_Error(object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.Path == GetLocalUrl(new Uri(WorkerPageUrl)))
        {
            Common.LogException(HttpContext.Current.Error.GetBaseException());
        }

    }
    
    protected class Setting
    {
        public DateTime? LastRun { get; set; }
        public DateTime? LastSuccessfulRun { get; set; }
        public long RunCount { get; set; }
    }
    Setting currentSetting = null;
    protected Setting CurrentSetting
    {
        get
        {
            if (currentSetting == null)
            {
                using (System.Security.Principal.WindowsImpersonationContext imp = Common.Impersonate())
                {
                    System.IO.FileInfo f = new System.IO.FileInfo(HttpContext.Current.Server.MapPath("~/data/settings.xml"));
                    if (f.Exists)
                    {
                        System.Xml.Linq.XDocument doc = System.Xml.Linq.XDocument.Load(f.FullName);
                        currentSetting = (from s in doc.Elements("Setting")
                                          select new Setting()
                                          {
                                              LastRun = DateTime.Parse(s.Element("LastRun").Value),
                                              LastSuccessfulRun = DateTime.Parse(s.Element("LastSuccessfulRun").Value),
                                              RunCount = long.Parse(s.Element("RunCount").Value)
                                          }).First();

                    }
                }
            }

            if (currentSetting == null)
            {
                currentSetting = new Setting()
                {
                    LastRun = null,
                    LastSuccessfulRun = DateTime.Now.AddYears(-1),//ignore older than one year old in test
                    RunCount = 0
                };
            }

            return currentSetting;
        }
        set
        {
            currentSetting = value;
            if (Common.Live)
            {
                using (System.Security.Principal.WindowsImpersonationContext imp = Common.Impersonate())
                {
                    System.IO.DirectoryInfo di = new System.IO.DirectoryInfo(HttpContext.Current.Server.MapPath("~/data"));
                    if (!di.Exists)
                        di.Create();
                    System.Xml.XmlWriter writer = System.Xml.XmlWriter.Create(HttpContext.Current.Server.MapPath("~/data/settings.xml"));
                    try
                    {
                        System.Xml.Linq.XDocument doc = new System.Xml.Linq.XDocument(
                                new System.Xml.Linq.XElement("Setting",
                                    new System.Xml.Linq.XElement("LastRun", currentSetting.LastRun ?? DateTime.Now),
                                    new System.Xml.Linq.XElement("LastSuccessfulRun", currentSetting.LastSuccessfulRun),
                                    new System.Xml.Linq.XElement("RunCount", currentSetting.RunCount)
                                    )
                            );
                        doc.WriteTo(writer);
                    }
                    catch (Exception exc)
                    {
                        Common.LogException(exc);
                    }
                    finally
                    {
                        writer.Flush();
                        writer.Close();
                    }
                }
            }
        }
    }
    protected void SaveSettings()
    {
        CurrentSetting = CurrentSetting; // reset to ensure "setter" code saves to file
    }
    private string GetLocalUrl(Uri uri)
    {
        string ret = uri.PathAndQuery;
        if (!string.IsNullOrEmpty(uri.Query))
            ret = ret.Replace(uri.Query, "");

        return ret;
    }

DoTimedWork.ashx:

    protected DateTime LastSuccessfulRun
    {
        get
        {
            // TryParse handles a missing or malformed querystring value
            // without the cost of a thrown exception
            DateTime parsed;
            if (DateTime.TryParse(HttpContext.Current.Request["LastSuccessfulRun"], out parsed))
                return parsed;
            return DateTime.Now.AddDays(-1); // default: one day back
        }
    }
    
    public void ProcessRequest(HttpContext context)
    {
        if (context.Request["dowork"] != "false") // don't do work if it's just montastic hitting the page (to make sure the timer is running)
        {
                context.Server.ScriptTimeout = 1800; // 30 minutes

                // do work
        }
        context.Response.Write("done");
    }
 
    public bool IsReusable
    {
        get
        {
            return false;
        }
    }

Common.cs:

using System.Runtime.InteropServices;
using System.Security.Principal;

    public static bool Live
    {
        get
        {
            return HttpContext.Current.Request.Url.Host != "localhost";
        }
    }
    public const int LOGON_TYPE_INTERACTIVE = 2;
    public const int LOGON_TYPE_PROVIDER_DEFAULT = 0;
    // Using this api to get an accessToken of specific Windows User by its user name and password
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static public extern bool LogonUser(string userName, string domain, string passWord, int logonType, int logonProvider, ref IntPtr accessToken);

    public static WindowsImpersonationContext Impersonate() //run code as a windows user with permissions to write files, etc.
    {

        IntPtr accessToken = IntPtr.Zero;
        LogonUser("REPLACE_WITH_WINDOWS_USER", "", "REPLACE_WITH_WINDOWS_PASSWORD", LOGON_TYPE_INTERACTIVE, LOGON_TYPE_PROVIDER_DEFAULT, ref accessToken);

        WindowsIdentity identity = new WindowsIdentity(accessToken);

        return identity.Impersonate();
    }
    public static void LogException(Exception exc)
    {
        LogActivity(exc.Message + "\n" + exc.StackTrace);
    }
    public static void LogActivity(string message)
    {
        if (Live)
        {
            using (WindowsImpersonationContext imp = Impersonate())
            {
                DirectoryInfo d = new DirectoryInfo(HttpContext.Current.Server.MapPath("~/data/temp/"));
                if (!d.Exists)
                    d.Create();
                var file = File.Create(HttpContext.Current.Server.MapPath("~/data/temp/" + DateTime.Now.Ticks + ".log"));
                try
                {
                    byte[] m = System.Text.Encoding.ASCII.GetBytes(message + "\n");
                    file.Write(m, 0, m.Length);
                }
                catch (Exception exc)
                {
                    byte[] m = System.Text.Encoding.ASCII.GetBytes(exc.Message + "\n" + exc.StackTrace);
                    try
                    {
                        file.Write(m, 0, m.Length);
                    }
                    catch (Exception) { }
                }
                finally
                {
                    file.Flush();
                    file.Close();
                }
            }
        }
        else
        {
            HttpContext.Current.Response.Write(message);
        }
    }

There are some issues with this approach to consider.

  • for very high-usage sites or very intensive timed work, this approach may put a burden on your web server that you would rather not have there
  • Application_Start only runs after the first request to your site following an app pool recycle, IIS restart, server restart, etc. If your site goes through periods of inactivity, you may or may not care whether the Timer keeps executing. If you do, you need to ensure the site is hit regularly in some way. I use the website monitor Montastic for this.

So, there are going to be circumstances that make the windows service solution better. You just need to decide whether the benefits of using a service outweigh the pain of developing, maintaining, and deploying it alongside your web application.
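For reference, the Application_Start wiring discussed above could look something like the sketch below. This is not the exact code from the post: the handler URL, the five-minute interval, and the Global class shape are all assumptions you would adjust for your site.

```csharp
// Global.asax.cs -- sketch of kicking off the timed work from Application_Start.
// The URL and interval below are placeholders.
using System;
using System.Net;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    // keep a static reference so the timer isn't garbage collected
    private static Timer workTimer;

    protected void Application_Start(object sender, EventArgs e)
    {
        workTimer = new Timer(delegate
        {
            try
            {
                // request the handler so the work runs inside a normal
                // ASP.NET request (and gets the 30-minute ScriptTimeout)
                using (WebClient client = new WebClient())
                    client.DownloadString("http://www.example.com/DoTimedWork.ashx");
            }
            catch (Exception exc)
            {
                Common.LogException(exc);
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }
}
```

Note that the timer only starts after the first request following a recycle, which is exactly the limitation the second bullet above describes.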

Loading images last with jQuery

There are lots of ways to make your webpages faster and more responsive, and YSlow is an excellent tool for identifying many of them on a particular page.

One of the best things you can do is reduce the number of requests (css/js/images/etc) to the server. Typically, this would mean that you would combine files – merge all of your JS and CSS (and minify while you are at it), and use CSS Sprites to combine images.

One major problem with CSS Sprites is that they can be quite painful to maintain. Over time, as you add or change images, you have to rebuild the combined image and update all of the CSS rules that specify coordinates. Sometimes this makes the CSS Sprite technique unreasonable to implement.

In one such case, we had about 50 images in one application that were causing the page to take a long time to load. These images were previews of different design choices the user could make. The designs (and their previews) were database driven so that new designs could be added through an admin interface. CSS Spriting the previews would have seriously hampered that flexibility.

One other design consideration was that the previews weren’t that important – the page was fully functional and usable without the images. In fact, the designs weren’t even visible until you toggled the design menu.

There is a lazy loader plugin for jQuery already available here, but it didn't fit our needs. Instead of deferring images so the page becomes usable as soon as possible (and then loading them immediately), it skips loading offscreen images until they are scrolled into view. That might have mostly worked, but I preferred to start loading the images as soon as the page was ready rather than waiting for the design menu to be expanded to initiate the load. That way, most of the time the designs would be visible by the time the user opens the menu, without interfering with the rest of the interface.

My solution was to set the src for all of the previews to a single animated loading image – like one you can get here. Then, I set a custom attribute on the image for the real preview’s url. Finally, some jQuery code runs after the page is done loading which replaces each src attribute with the url in the custom attribute, which will load the real image.

Sample HTML:

<ul>
    <li templateid="7bcf8f23-fdd0-45c5-a429-d2ffb59e47f0" class="selected"><span>3D Dots
        Dark</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/7bcf8f23-fdd0-45c5-a429-d2ffb59e47f0/preview.jpg"
            class="deferredLoad" alt="3D Dots Dark" />
    </li>
    <li templateid="b1a09e28-629e-472a-966e-fc98fc269607"><span>3D Dots Lite</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/b1a09e28-629e-472a-966e-fc98fc269607/preview.jpg"
            class="deferredLoad" alt="3D Dots Lite" />
    </li>
    <li templateid="e121d26a-9c8f-466f-acc7-9a79d5e8cfa9"><span>Beauty</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/e121d26a-9c8f-466f-acc7-9a79d5e8cfa9/preview.jpg"
            class="deferredLoad" alt="Beauty" />
    </li>
    <li templateid="322e4c7a-33e7-4e05-bb72-c4076a83a3d0"><span>Black and White</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/322e4c7a-33e7-4e05-bb72-c4076a83a3d0/preview.jpg"
            class="deferredLoad" alt="Black and White" />
    </li>
    <li templateid="57716da9-91ef-4cf0-82f1-722d0770ad7f"><span>Blank</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/57716da9-91ef-4cf0-82f1-722d0770ad7f/preview.jpg"
            class="deferredLoad" alt="Blank" />
    </li>
    <li templateid="a79e1136-db47-4acd-be3e-2daf4522796d"><span>Blue Leaves</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/a79e1136-db47-4acd-be3e-2daf4522796d/preview.jpg"
            class="deferredLoad" alt="Blue Leaves" />
    </li>
    <li templateid="03cb737d-4da7-46d5-b4e4-5ad4b4a3aaf4"><span>Blue Open</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/03cb737d-4da7-46d5-b4e4-5ad4b4a3aaf4/preview.jpg"
            class="deferredLoad" alt="Blue Open" />
    </li>
    <li templateid="899dff2f-38ba-44f7-9fe2-af66e62674a4"><span>Compass</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/899dff2f-38ba-44f7-9fe2-af66e62674a4/preview.jpg"
            class="deferredLoad" alt="Compass" />
    </li>
</ul>

Sample javascript:

$(function() {
    $("img.deferredLoad").each(function() {
        var $this = $(this);
        $this.attr("src", $this.attr("deferredsrc")).removeClass("deferredLoad");
    });
});
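One refinement you might consider: because the swap happens as soon as the page is ready, a slow-downloading preview can briefly render half-loaded in place of the spinner. A small plain-JavaScript sketch (the `swapWhenLoaded` helper name is mine) that preloads each real image off-screen and only swaps it in once the download finishes:

```javascript
// Hypothetical helper: preload the real image off-screen, then swap it in
// only once it has fully downloaded, so the spinner never gives way to a
// partially rendered preview.
function swapWhenLoaded(img) {
    var real = img.getAttribute("deferredsrc");
    if (!real) return null;              // nothing deferred on this element
    var loader = new Image();            // off-screen preloader
    loader.onload = function () {
        img.setAttribute("src", real);   // swap in the finished image
        img.className = img.className.replace(/\bdeferredLoad\b/, "");
    };
    loader.src = real;                   // start the download
    return real;
}
```

You could wire it up with the same ready handler as above, e.g. `$(function() { $("img.deferredLoad").each(function() { swapWhenLoaded(this); }); });`.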