Selenium Regression Testing Part II – Tips and Tricks


In my last post, I talked about how you can use Selenium to do real regression tests for web applications. It’s a great way to automate testing the real user experience, and not just the backend stuff.

That said, Selenium is a relatively new technology, and it’s not without its issues. Building your first test can be a trial-and-error process. There are so many different ways to do the same test that it can be overwhelming, and often one or more of the ways you try won’t work. Below, I’ve listed some of the common problems I ran into and tips I found.

  • Selenium Commands
    • waitForElementPresent
      • this will block your test from proceeding until it finds the identified element.  It checks once a second for 30 seconds.  (See the sample test after this list.)
      • this was the most common way I dealt with ‘ajax’ type interactions where it takes an unknown period of time for something to show up
      • I also use it generally instead of verifyElementPresent – it does basically the same thing with a little wiggle room
    • mouseDown/mouseUp/mousePressed/click
      • mostly equivalent, but sometimes you need to match the desired event to a javascript handler
      • try click first.  If it doesn’t work the way you want, move on to mouseDown and so on.
    • waitForFrameToLoad/selectFrame
      • important if you use iFrames (modal dialog, etc.)
      • selenium locators only hit the current frame – you have to select the correct frame before its elements can be found
      • an easy way to get back to the root window is to do selectFrame null
    • type vs. typeKeys
      • type fills out an input in code – if it works, use this.  You can use this, and then fire a single typeKeys for the last character if you need an event to be triggered.
      • typeKeys fires the key events on the element
        • has some idiosyncrasies – certain letters (I’m looking at you, ‘y’) are reserved to do other special keypresses
        • necessary if you are using a wysiwyg ‘designmode’ type box instead of a standard input
    • verifyX vs. assertX
      • if verify fails, the test continues (and is marked as errored).  If assert fails, the test aborts.
      • Usually verify is better, unless one task blocks a future one from functioning correctly
    • runScript vs. Eval vs. Expression
      • runScript inserts the javascript you provide into the current frame/window.  Useful for those things selenium doesn’t support – like moving a cursor around and selecting text in a wysiwyg
      • Eval runs javascript in the context of selenium. Handy for complex checks – use waitForEval (for instance, checking the css background image property of a particular element)
        • Use this.browserbot.findElement("selenium locator") to find elements the way selenium would
        • Use window.X to access objects in the current frame/window context
      • Expression is similar to Eval, but uses Selenium’s custom expression format instead of javascript (though you can embed javascript inside it with javascript{})
        • storedVars['key'] lets you read a variable you created with a Selenium ‘store’ expression
    • selectPopUp
      • useful for checking content in a popup window that your test initiated
      • easiest to select the popup by its html title, but do a ‘pause’ first to let it load
  • Selenium Selectors and XPath
    • In general, be as abstract as possible.
      • Don’t select individual server generated ids (hand crafted html ids are ok if you don’t expect them to change)
      • Don’t select on complicated relationships ( /div[0]/div[2]/a[4] ) – your html structure will change and you’ll have to maintain it
      • Select links by the simple link=text when possible – easy to read/maintain, unlikely to change
      • Use //that (any descendant) instead of /this/that where possible
      • .  references ‘this’ element.  Helps to select something with a particular text:   //div[@id='publish-private-shares']//p[.='This is pretty cool.']
      • contains() is useful if you don’t know the exact text (for instance, when an element has multiple css classes):     //div[@id='pageContent' and contains(@class,'contenteditable') and h2='Goals']/p[1]
  • Selenium RC
    • While Selenium IDE can create a c# version of your tests, doing so leaves you with two tests to maintain.  You can run your ‘selenese’ tests directly with RC, too.
      • JAVAPATH\java.exe -jar SELENIUMPATH\selenium-server.jar -htmlSuite "*BROWSER" "BASESITEURL" "SUITEFILEPATH" "RESULTSFILEPATH"
      • I’ve written a simple csharp console project that automatically finds the correct javapath and fires up the test when you run it.  If people ask in the comments, I’ll post it.
    • Last I checked, Chrome and Safari-Windows don’t work.  Chrome is supposed to be fixed in Selenium RC 1.0.4
  • Sauce RC
    • This is a great UI to help test multiple browsers, but there are a couple of issues
      • Firefox works, but only in dual window mode
      • IE works, but only in single window mode.
      • The ‘timeout’ setting implies a default timeout per action in your test, but it is actually the timeout for your entire test run.  Since it defaults to 30 seconds, you’ll probably want to change it – otherwise your tests will suddenly die with no explanation or log.
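
To tie several of these commands together, here is a minimal ‘selenese’ test table sketch – the url, locators, and expected values are all hypothetical.  It waits for an ajax-loaded link, clicks it, selects an iframe, stores some text, checks it with waitForEval, and returns to the root window:

<table>
  <tr><td>open</td><td>/dashboard</td><td></td></tr>
  <tr><td>waitForElementPresent</td><td>link=Publish</td><td></td></tr>
  <tr><td>click</td><td>link=Publish</td><td></td></tr>
  <tr><td>selectFrame</td><td>publish-dialog</td><td></td></tr>
  <tr><td>storeText</td><td>//div[@id='status']</td><td>status</td></tr>
  <tr><td>waitForEval</td><td>storedVars['status']</td><td>Published</td></tr>
  <tr><td>selectFrame</td><td>null</td><td></td></tr>
</table>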

I’m sure there is probably more I’ve forgotten, so leave a comment if you get stuck and I’ll try to help out if I can.

Web Application Functional Regression Testing Using Selenium


At Foliotek, we use a rapid development methodology.  Typically, a new item will go from definition through coding to release in a month’s time (bucketed along with other new items for the month).  A bugfix will nearly always be released within a week of the time it was reported.  In fact, we are currently experimenting with a methodology that will allow us to test and deploy new items individually as well – which means that a new (small) item can go from definition to release in as little as a week, too.

Overall, this kind of workflow is great for us, and great for our customers.  We don’t need to wait a year to change something to make our product more compelling, and customers don’t have to wait a year to get something they want implemented.  We also avoid the shock of suddenly introducing a year’s worth of development to all our customers all at once – a handful of minor changes every month (or week) is much easier to cope with.

However, it also means that Foliotek is never exactly the same as it was the week before.  Every time something changes, there is some risk that something breaks.   We handle this risk in two ways:

  1. We test extremely thoroughly
  2. We fix any problems that arise within about a week (severe problems usually the same day)

At first, we did all testing manually.  This is the best way to test, assuming you have enough good testers with enough time to do it well.  Good testers can’t be just anyone – they have to have a thorough knowledge of how the system should work, they have to care that it does work perfectly, and they have to have a feel for how they might try to break things.  Having enough people like this with enough time to do testing is expensive.

Over time, two related things happened.  One was that we added more developers to the project, and started building more, faster.  Two was that the system was growing bigger and more complex.

As more people developed on it and the system grew more complex, our testing needs grew exponentially.  The rise in complexity and people developing led to much, much more potential for side-effects – problems where one change affects a different (but subtly related) subsystem.  Side-effects by their nature are impossible to predict.  The only way to catch them was to test EVERYTHING any time ANYTHING changed.

We didn’t have enough experienced testers to do that every month (new development release) let alone every week (bugfix release).

To deal with that, we started by writing a manual regression test script to run through each week.  While this didn’t free up any time overall – it did mean that once the test was written well, anyone could execute it.  This was doable because we had interns who had to be around to help handle support calls anyway – and they were only intermittently busy.  In their free time they could execute the tests.

Another route we could have gone would have been to write automated unit tests (http://en.wikipedia.org/wiki/Unit_testing).  Basically, these are tiny contracts the developers write that say something like “calling the Add function on the User class with name Luke will result in the User database table having a new row with name Luke”.  Each time the project is built, the contracts are verified.  This is great for projects like code libraries and APIs where the product of the project IS the result of each function.  For a web application, though, the product is the complex interaction of functions and how they produce an on-screen behavior.  There are lots of ways that the individual functions could all be correct and the behavior still fails.  It is also very difficult (or impossible) to unit test the client-side parts of a web application – javascript, AJAX, CSS, etc.  Unit testing would cost a non-trivial amount (building and maintaining the tests) for a trivial gain.

Eventually, we discovered the Selenium project (http://seleniumhq.org/download/).  The idea of Selenium is basically to take our manual regression test scripts, and create them such that a computer can automatically run the tests in a browser (pretty much) just like a human tester would.  This allows us to greatly expand our regression test coverage, and run it for every single change we make and release.

Here are the Selenium tools we use and what we use them for:

  • Selenium IDE (http://release.seleniumhq.org/selenium-ide/) : A Firefox plugin that lets you quickly create tests using a ‘record’ function that builds them out of your clicks, lets you manually edit to make your tests more complex, and runs them in Firefox.
  • Selenium RC (http://selenium.googlecode.com/files/selenium-remote-control-1.0.3.zip):  A java application that will take the tests you create with Selenium IDE and run them in multiple browsers (firefox, ie, chrome, etc).  It runs from the command line, so it’s fairly easy to automate test runs into build actions, etc. as well.
  • Sauce RC (http://saucelabs.com/downloads): A fork of RC that adds a web ui on top of the command line interface.  It’s useful for quickly debugging tests that don’t execute properly in non-firefox browsers.  It also integrates with SauceLabs – a service that lets you run your tests in the cloud on multiple operating systems and browsers (for a fee).
  • BrowserMob (http://browsermob.com/performance-testing): An online service that will take your selenium scripts and use them to generate real user traffic on your site.  Essentially, it spawns as many real machines and instances of Firefox as needed to run your test simultaneously – each just as you would do locally – for a fee.  It costs less than $10 to test up to 25 “real browser users” – which can actually map to many more users than that, since the automated test doesn’t have to think between clicks.  It gets expensive quickly to test more users than that.

Selenium is a huge boon for us.  We took the manual tests that would occupy a tester for as much as a day, and made it possible to run those same tests with minimal interaction in a half hour or less.  We’ll be able to cover more test cases, and run them more often – even running them as development occurs to catch issues earlier.

In my next post, I’ll talk about the details of how you build tests, run them, maintain them, etc. with the tools mentioned above.

Threading with Impersonation in an ASP.NET Project


Every once in a while, you might run into a need to do something that takes some time in a web app, but doesn’t require user interaction. Maybe you are processing an uploaded file (rescaling images, unzipping, etc). Maybe you are rewriting some statistical data based on new posts. Basically, something that takes minutes or hours, but isn’t important enough to make the user wait on it interactively.

You could set up a “job” in a database to be run the next time your timer runs (see https://lanitdev.wordpress.com/2010/03/16/running-a-scheduled-task/). If you don’t have a timer yet, though, that can be overkill if you don’t care that multiple jobs may run at once.

In my case, I needed to export a large volume of data to a zip file. I asked up front for an email address – and the user will receive a link to the completed zip in an email later. The job would only be performed by admins, and even then only about once a year – so there was no need to schedule the job – I could just fire it off when the user requested it.

An easy way to do this is to use the .NET threading objects in System.Threading. Because I need to save a file, I also have one additional issue – threads don’t automatically run under the same account that the site does, so I had to include code to impersonate a user that has write permissions.

Here’s a bit of code to get you started:

// param class to pass multiple values
private class ExportParams
{
    public int UserID { get; set; }
    public string Email { get; set; }
    public string ImpersonateUser { get; set; }
    public string ImpersonateDomain { get; set; }
    public string ImpersonatePassword { get; set; }
}

protected void btnExport_Click(object sender, EventArgs e)
{
    // .... code to get current app user, windows user to impersonate .....

    Thread t = new Thread(new ParameterizedThreadStart(DoExport));
    t.Start(new ExportParams()
    {
        UserID = CurrentUserID,
        Email = txtEmail.Text,
        ImpersonateUser = username,
        ImpersonateDomain = domain,
        ImpersonatePassword = password
    });

    // show user 'processing' message .....
}

private void DoExport(object param)
{
    ExportParams ep = (ExportParams)param;

    using (var context = Security.Impersonate(ep.ImpersonateUser, ep.ImpersonateDomain, ep.ImpersonatePassword))
    {
        // do the work here..............
    }
}

Here’s the relevant part of the Security class that does the impersonation:

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
// .....
public class Security
{
    //.............
    public const int LOGON_TYPE_INTERACTIVE = 2;
    public const int LOGON_TYPE_PROVIDER_DEFAULT = 0;

    // Using this win32 api to get an accessToken for a specific Windows user by user name and password
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static public extern bool LogonUser(string userName, string domain, string passWord, int logonType, int logonProvider, ref IntPtr accessToken);

    public static WindowsImpersonationContext Impersonate()
    {
        return Impersonate("DEFAULT_USER", "DEFAULT_DOMAIN", "DEFAULT_PASSWORD");
    }

    public static WindowsImpersonationContext Impersonate(string username, string domain, string password)
    {
        IntPtr accessToken = IntPtr.Zero;
        var success = LogonUser(username, domain, password, LOGON_TYPE_INTERACTIVE, LOGON_TYPE_PROVIDER_DEFAULT, ref accessToken);

        if (!success)
            return null;

        WindowsIdentity identity = new WindowsIdentity(accessToken);

        return identity.Impersonate();
    }
    // ..........
}
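
A minimal usage sketch (the credentials and path are placeholders): wrap any privileged work in the returned context, and note that Impersonate returns null when the logon fails.

using (var ctx = Security.Impersonate("someuser", "SOMEDOMAIN", "password"))
{
    // this code runs as the impersonated Windows user (ctx is null if LogonUser failed)
    System.IO.File.WriteAllText(@"C:\exports\readme.txt", "export complete");
}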

Simple Usability Testing with TryMyUI.com


Any software developer that is concerned with making great software should be testing their interfaces on their users (or people similar to their users). It’s easy to convince yourself you’ve built a great UI if you only see people on the team using it. People on the team have intimate knowledge of how the system is supposed to work, and so they nearly always succeed.

The real test comes when you watch someone who has never seen your system before attempt to perform some vague tasks you have set out for them to do.  These types of usability tests can not only point out the failures of your system to match what typical users expect – but watching these users struggle with stuff you wrote can be a strong motivator to improve things.  For that reason, it’s important to do usability tests as often as possible, and let as many people watch as possible.

At Lanit, we’ve attempted to do usability tests a number of ways:

We’ve done full scale tests that record mouse tracking and video in addition to audio. These take a ton of time to set up, to edit/summarize the results, and to share. And, it doesn’t seem that the mouse tracking or video added much to what the user was saying (assuming the moderator was prompting for their thoughts often). This time investment caused us to do them at most once or twice a year.

We’ve tried doing simpler screen-cast and voice recordings at a local university. This was an improvement – but it still took time to schedule a room, and we still needed to bring videos back to the office to share, or spend time summarizing/editing them. It also ate up a half day for two people to find participants and set up the tests. And, it was a bit awkward to approach people and ask to record them testing your software.  Usually, we managed to do these every few months.

We’ve also tried bringing users in to our office – so that we can easily share the screen and voice live, and have an immediate discussion afterward (similar to what Steve Krug suggests here: http://network.businessofsoftware.org/video/steve-krug-on-the-least-you). This was an improvement, but still required a time investment in finding willing users to come to us.  We only tried this once – but given the effort to advertise etc., it felt like we would still only do a test at most once a month.

Finally, we decided to give one of the new services that have recently popped up a try. There are many of these services that do some form of usability testing for you – but we narrowed it down to usertesting.com and trymyui.com because they seemed to best emulate what we were doing live (the only real difference is that users are screened/trained by the site to speak what they are thinking, instead of requiring a live moderator to prompt them). I chose trymyui mainly because I liked the results of my first (free) test, and it was slightly cheaper ($25/user/test instead of $39). All we had to do was provide the script of tasks to accomplish (which took maybe 10 or 15 minutes to create) and request testers; usually within an hour or two we had a great recording of the user’s screen and voice. I had one experience where a video didn’t arrive for about 2 days, but their support was very helpful and indicated that they pre-screen the results to ensure the testers tried the tasks, etc. (a bonus, as you are basically guaranteed to get a good video for your $25).

We were hoping that going the online service route would save us a bunch of time – and it did – but an unexpected benefit was that the short turnaround allowed us to do a kind of A/B testing.  Before, when we did live tests, we’d come away with 10 or so items we’d want to improve from across the 3-5 tests in a session, and we’d put them in a list to take care of some time in the future.  Then, we’d have to wait until the next time to see if we made a significant improvement and what the next set of problems were.

With trymyui, we could often develop a change to improve the experience and test that change the same day.  This created an even more motivating experience for building a great UI – because you could often see the next user do better because of the last test.  In the end, we made several improvements over a week that would have taken us months to do before.  And, it was so effortless to set up and so easy to see the benefits that I know we will continue to use this site to test our UIs on a regular basis.

Handy ASP.NET Debug Extension Method


Most of the programmers I know (myself included) don’t bother with the built-in Visual Studio debugging tools. They are slow and resource intensive. Usually, it’s more efficient to just do one or more Response.Write calls to see key data at key steps.

That can be a hassle, though. Most objects don’t print very well. You have to create a loop or write some LINQ/String.Join to write items in a collection.

Inspiration struck – couldn’t I write an extension method on object to write out a reasonable representation of pretty much anything? I could write out html tables for lists with columns for properties, etc.

Then I thought – I love the javascript debug console in firebug. I can drill down into individual items without being overwhelmed by all of the data at once. Why not have my debug information spit out javascript to write to the debug console? That also keeps it out of the way of the rest of the interface.

Here’s the code:

public static void Debug(this object value)
{
    if (HttpContext.Current != null)
    {
        HttpContext.Current.Response.Debug(value);
    }
}

public static void Debug(this HttpResponse Response, params object[] args)
{
    new HttpResponseWrapper(Response).Debug(args);
}

public static void Debug(this HttpResponseBase Response, params object[] args)
{
    ((HttpResponseWrapper)Response).Debug(args);
}

public static void Debug(this HttpResponseWrapper Response, params object[] args)
{
    if (Response != null && Response.ContentType == "text/html")
    {
        Response.Write("<script type='text/javascript'>");
        Response.Write("if(console&&console.debug){");
        Response.Write("console.debug(" + args.SerializeToJSON() + ");");
        Response.Write("}");
        Response.Write("</script>");
    }
}

The various overloads allow:

myObject.Debug();
new {message="test",obj=myObject}.Debug();
Response.Debug("some message",myObject,myObject2);
//etc

The only other thing you’ll need is the awesome JSON.NET library for the .SerializeToJSON() call to work (which turns the .NET object into a form javascript can deal with). Get it here. FYI, the library does choke on serializing some complex objects, so occasionally you’ll need to simplify before calling debug.
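
The post doesn’t show SerializeToJSON itself, so here is a minimal sketch of how such an extension might wrap JSON.NET (the method name and placement are this post’s convention, not part of JSON.NET):

using Newtonsoft.Json;

public static class JsonExtensions
{
    public static string SerializeToJSON(this object value)
    {
        // turn any .NET object graph into a javascript-consumable JSON string
        return JsonConvert.SerializeObject(value);
    }
}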

Running a scheduled task inside of a asp.net web application


Occasionally, there might be some activity in your web application that should trigger code to execute at a later date. One of the most common cases is sending an email (reminder, change digest, etc), but there are other uses too. You might want to defer some processing-intensive activity to off hours. In my most recent case, I needed to check for changes to a purchased/installed application’s db and replicate them to another product every 5 minutes or so.

I’ve solved similar problems before by:

  1. From the web application, inserting a row into a ‘scheduled tasks’ table with a time to execute and a script url to run
  2. Creating and running a windows service somewhere that wakes up every 5 minutes or so, looks at the table for ‘due’ items, and opens a request to the url

This works, but it has some drawbacks.

  • You have to learn how to build and deploy a service.  Not particularly hard, but something a web developer doesn’t really need to know
  • You have to copy or figure out how to share some data access logic between the service and the web application, and maintain changes
  • You have to figure out where to deploy the service
  • You have to have somewhere to deploy the service – if you are using a shared webhost with no RDP you are out of luck
  • It’s hard to be sure the service is running.  It’s easy to forget about if your infrastructure changes.
  • You need to deal with it when the service errors.
  • You have to be careful that the service doesn’t run the same script twice (or make it so it doesn’t hurt anything if it does), in case it gets run on two machines, etc.
  • Many more tiny, but real headaches for dealing with and maintaining a separate but connected project

I really didn’t want to go through all of that again for yet another project.  There had to be a simpler solution.

Thanks to Google, I found this article that led me to use a Cache object expiration to simulate the service.  It’s a hack, but it solved my issue.
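
For reference, here is a minimal sketch of that Cache trick (my reconstruction of the linked article’s idea, not code from this project): add a cache item with an absolute expiration and a removal callback that does the work and then re-adds the item, forming a loop.

private const string WorkKey = "TimedWorkItem";

private void ScheduleWork(int minutes)
{
    HttpRuntime.Cache.Add(WorkKey, minutes, null,
        DateTime.Now.AddMinutes(minutes),
        System.Web.Caching.Cache.NoSlidingExpiration,
        System.Web.Caching.CacheItemPriority.NotRemovable,
        (key, value, reason) =>
        {
            // do the timed work here, then schedule the next run
            ScheduleWork((int)value);
        });
}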

Later, I found this StackOverflow post about the same issue/fix.  The comments led me to a System.Timers.Timer solution, which is easier to understand. Here it is:

The global.asax:

     public const int MINUTES_TO_WAIT = 5;

    private string _workerPageUrl = null;
    protected string WorkerPageUrl
    {
        get
        {
            if (_workerPageUrl == null)
                _workerPageUrl = (Application["WebRoot"] + VirtualPathUtility.ToAbsolute("~/DoTimedWork.ashx")).Replace("//", "/").Replace(":/", "://") + "?schedule=true";


            return _workerPageUrl;
        }
    }


    protected void Application_Start(Object sender, EventArgs e)
    {
        Uri reqUri = HttpContext.Current.Request.Url;
        Application["WebRoot"] = new UriBuilder(reqUri.Scheme, reqUri.Host, reqUri.Port).ToString();
        Application["TimerRunning"] = false;

        //StartTimer();   // don't want timer to start unless we call the url (don't run in dev, etc).  Url will be called by montastic for live site.
    }

    private void StartTimer()
    {
        if (!(bool)Application["TimerRunning"]) // don't want multiple timers
        {
            System.Timers.Timer timer = new System.Timers.Timer(MINUTES_TO_WAIT * 60 * 1000);
            timer.AutoReset = false;
            timer.Enabled = true;
            timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed);
            timer.Start();
            Application["TimerRunning"] = true;
        }

    }

    void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        Application["TimerRunning"] = false;
        System.Net.WebClient client = new System.Net.WebClient();
        // have to issue a request so that there is a context
        // also lets us separate all of the scheduling logic from the work logic
        client.DownloadData(WorkerPageUrl + "&LastSuccessfulRun=" + Server.UrlEncode((CurrentSetting.LastSuccessfulRun ?? DateTime.Now.AddYears(-1)).ToString()));
    }

    protected void Application_BeginRequest(Object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.Path == GetLocalUrl(new Uri(WorkerPageUrl)))
        {
            CurrentSetting.LastRun = DateTime.Now;
            try
            {
                CurrentSetting.RunCount++;
            }
            catch (Exception)
            {
                CurrentSetting.RunCount = 0;  // just in case of an overflow
            }
            SaveSettings();
        }
    }
    protected void Application_EndRequest(Object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.Path == GetLocalUrl(new Uri(WorkerPageUrl)))
        {
            if (HttpContext.Current.Error == null)
            {
                CurrentSetting.LastSuccessfulRun = DateTime.Now;
                SaveSettings();
            }

            if (HttpContext.Current.Request["schedule"] == "true")// register the next iteration whenever worker finished
                StartTimer();
        }
    }

    void Application_Error(object sender, EventArgs e)
    {
        if (HttpContext.Current.Request.Path == GetLocalUrl(new Uri(WorkerPageUrl)))
        {
            Common.LogException(HttpContext.Current.Error.GetBaseException());
        }

    }
    
    protected class Setting
    {
        public DateTime? LastRun { get; set; }
        public DateTime? LastSuccessfulRun { get; set; }
        public long RunCount { get; set; }
    }
    Setting currentSetting = null;
    protected Setting CurrentSetting
    {
        get
        {
            if (currentSetting == null)
            {
                using (System.Security.Principal.WindowsImpersonationContext imp = Common.Impersonate())
                {
                    System.IO.FileInfo f = new System.IO.FileInfo(HttpContext.Current.Server.MapPath("~/data/settings.xml"));
                    if (f.Exists)
                    {
                        System.Xml.Linq.XDocument doc = System.Xml.Linq.XDocument.Load(f.FullName);
                        currentSetting = (from s in doc.Elements("Setting")
                                          select new Setting()
                                          {
                                              LastRun = DateTime.Parse(s.Element("LastRun").Value),
                                              LastSuccessfulRun = DateTime.Parse(s.Element("LastSuccessfulRun").Value),
                                              RunCount = long.Parse(s.Element("RunCount").Value)
                                          }).First();

                    }
                }
            }

            if (currentSetting == null)
            {
                currentSetting = new Setting()
                {
                    LastRun = null,
                    LastSuccessfulRun = DateTime.Now.AddYears(-1),//ignore older than one year old in test
                    RunCount = 0
                };
            }

            return currentSetting;
        }
        set
        {
            currentSetting = value;
            if (Common.Live)
            {
                using (System.Security.Principal.WindowsImpersonationContext imp = Common.Impersonate())
                {
                    System.IO.DirectoryInfo di = new System.IO.DirectoryInfo(HttpContext.Current.Server.MapPath("~/data"));
                    if (!di.Exists)
                        di.Create();
                    System.Xml.XmlWriter writer = System.Xml.XmlWriter.Create(HttpContext.Current.Server.MapPath("~/data/settings.xml"));
                    try
                    {
                        System.Xml.Linq.XDocument doc = new System.Xml.Linq.XDocument(
                                new System.Xml.Linq.XElement("Setting",
                                    new System.Xml.Linq.XElement("LastRun", currentSetting.LastRun ?? DateTime.Now),
                                    new System.Xml.Linq.XElement("LastSuccessfulRun", currentSetting.LastSuccessfulRun),
                                    new System.Xml.Linq.XElement("RunCount", currentSetting.RunCount)
                                    )
                            );
                        doc.WriteTo(writer);
                    }
                    catch (Exception exc)
                    {
                        Common.LogException(exc);
                    }
                    finally
                    {
                        writer.Flush();
                        writer.Close();
                    }
                }
            }
        }
    }
    protected void SaveSettings()
    {
        CurrentSetting = CurrentSetting; // reset to ensure "setter" code saves to file
    }

    private string GetLocalUrl(Uri uri)
    {
        string ret = uri.PathAndQuery;
        if (uri.Query != null && uri.Query.Length>0)
            ret = ret.Replace(uri.Query, "");

        return ret;
    }

DoTimedWork.ashx:

public class DoTimedWork : IHttpHandler
{
    protected DateTime LastSuccessfulRun
    {
        get
        {
            try
            {
                return DateTime.Parse(HttpContext.Current.Request["LastSuccessfulRun"]);
            }
            catch (Exception) { }
            return DateTime.Now.AddDays(-1);
        }
    }
    
    public void ProcessRequest(HttpContext context)
    {
        if (context.Request["dowork"] != "false") // don't do work if it's just montastic hitting the page (to make sure the timer is running)
        {
                context.Server.ScriptTimeout = 1800; // 30 minutes

                // do work
        }
        context.Response.Write("done");
    }
 
    public bool IsReusable
    {
        get { return false; }
    }
}

Common.cs

using System;
using System.IO;
using System.Web;
using System.Runtime.InteropServices;
using System.Security.Principal;

public static class Common
{
    public static bool Live
    {
        get
        {
            return HttpContext.Current.Request.Url.Host != "localhost";
        }
    }
    public const int LOGON_TYPE_INTERACTIVE = 2;
    public const int LOGON_TYPE_PROVIDER_DEFAULT = 0;
    // Using this api to get an accessToken of specific Windows User by its user name and password
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static public extern bool LogonUser(string userName, string domain, string passWord, int logonType, int logonProvider, ref IntPtr accessToken);

    public static WindowsImpersonationContext Impersonate() //run code as a windows user with permissions to write files, etc.
    {

        IntPtr accessToken = IntPtr.Zero;
        LogonUser("REPLACE_WITH_WINDOWS_USER", "", "REPLACE_WITH_WINDOWS_PASSWORD", LOGON_TYPE_INTERACTIVE, LOGON_TYPE_PROVIDER_DEFAULT, ref accessToken);

        WindowsIdentity identity = new WindowsIdentity(accessToken);

        return identity.Impersonate();
    }
    public static void LogException(Exception exc)
    {
        LogActivity(exc.Message + "\n" + exc.StackTrace);
    }
    public static void LogActivity(string message)
    {
        if (Live)
        {
            using (WindowsImpersonationContext imp = Impersonate())
            {
                DirectoryInfo d = new DirectoryInfo(HttpContext.Current.Server.MapPath("~/data/temp/"));
                if (!d.Exists)
                    d.Create();
                var file = File.Create(HttpContext.Current.Server.MapPath("~/data/temp/" + DateTime.Now.Ticks + ".log"));
                try
                {
                    byte[] m = System.Text.Encoding.ASCII.GetBytes(message + "\n");
                    file.Write(m, 0, m.Length);
                }
                catch (Exception exc)
                {
                    byte[] m = System.Text.Encoding.ASCII.GetBytes(exc.Message + "\n" + exc.StackTrace);
                    try
                    {
                        file.Write(m, 0, m.Length);
                    }
                    catch (Exception) { }
                }
                finally
                {
                    file.Flush();
                    file.Close();
                }
            }
        }
        else
        {
            HttpContext.Current.Response.Write(message);
        }
    }
}

There are some issues with this approach to consider.

  • for very high usage sites or very intensive timed work, the work may put a burden you wouldn’t want on your webserver
  • Application_Start only runs after the first request to your site after an app pool recycle, IIS restart, server restart, etc. If your site goes through periods of inactivity, you may or may not care if the Timer executes. If you do, you need to ensure the site is hit regularly in some way. I use the website monitor montastic for this.

So, there are going to be circumstances that make the windows service solution better. You just need to decide whether the benefits of using a service outweigh the pain of developing, maintaining, and deploying it alongside your web application.

Loading images last with jQuery


There are lots of ways to make your webpages faster and more responsive. YSlow is a great tool to help you find ways to make a particular page faster.

One of the best things you can do is reduce the number of requests (css/js/images/etc) to the server. Typically, this would mean that you would combine files – merge all of your JS and CSS (and minify while you are at it), and use CSS Sprites to combine images.

One major problem of using CSS Sprites is that it can be quite painful to maintain. Over time, if you want to add or change some of your images – you basically need to rebuild and replace the combined images and all of the CSS rules specifying coordinates. Sometimes, this makes the CSS Sprite technique unreasonable to implement.

In one such case, we had about 50 images in one application that were causing the page to take a long time to load. These images were previews of different design choices that the user could make. The design choices themselves (and their previews) were database driven so that we can add new designs through an admin interface. So, CSS spriting the previews would seriously hamper that flexibility.

One other design consideration was that the previews weren’t that important – the page was fully functional and usable without the images. In fact, the designs weren’t even visible until you toggled the design menu.

There is a lazy loader plugin for jQuery already available here – but it didn’t fit our needs. Rather than skipping images so the page becomes usable as soon as possible (and initiating the load once it is), that plugin skips loading offscreen images until they are scrolled into view. It might have somewhat worked for our needs – but I thought it was better to load the images as soon as possible, instead of waiting for the design menu to be expanded to initiate the load. That way, most of the time the designs would be visible by the time the user opens the menu – but the loading wouldn’t interfere with the rest of the interface.

My solution was to set the src for all of the previews to a single animated loading image – like one you can get here. Then, I set a custom attribute on the image for the real preview’s url. Finally, some jQuery code runs after the page is done loading which replaces each src attribute with the url in the custom attribute, which will load the real image.

Sample HTML:

<ul>
    <li templateid="7bcf8f23-fdd0-45c5-a429-d2ffb59e47f0" class="selected"><span>3D Dots
        Dark</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/7bcf8f23-fdd0-45c5-a429-d2ffb59e47f0/preview.jpg"
            class="deferredLoad" alt="3D Dots Dark" />
    </li>
    <li templateid="b1a09e28-629e-472a-966e-fc98fc269607"><span>3D Dots Lite</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/b1a09e28-629e-472a-966e-fc98fc269607/preview.jpg"
            class="deferredLoad" alt="3D Dots Lite" />
    </li>
    <li templateid="e121d26a-9c8f-466f-acc7-9a79d5e8cfa9"><span>Beauty</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/e121d26a-9c8f-466f-acc7-9a79d5e8cfa9/preview.jpg"
            class="deferredLoad" alt="Beauty" />
    </li>
    <li templateid="322e4c7a-33e7-4e05-bb72-c4076a83a3d0"><span>Black and White</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/322e4c7a-33e7-4e05-bb72-c4076a83a3d0/preview.jpg"
            class="deferredLoad" alt="Black and White" />
    </li>
    <li templateid="57716da9-91ef-4cf0-82f1-722d0770ad7f"><span>Blank</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/57716da9-91ef-4cf0-82f1-722d0770ad7f/preview.jpg"
            class="deferredLoad" alt="Blank" />
    </li>
    <li templateid="a79e1136-db47-4acd-be3e-2daf4522796d"><span>Blue Leaves</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/a79e1136-db47-4acd-be3e-2daf4522796d/preview.jpg"
            class="deferredLoad" alt="Blue Leaves" />
    </li>
    <li templateid="03cb737d-4da7-46d5-b4e4-5ad4b4a3aaf4"><span>Blue Open</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/03cb737d-4da7-46d5-b4e4-5ad4b4a3aaf4/preview.jpg"
            class="deferredLoad" alt="Blue Open" />
    </li>
    <li templateid="899dff2f-38ba-44f7-9fe2-af66e62674a4"><span>Compass</span>
        <img src="/static/img/ajax-loader-small.gif" deferredsrc="/resources/899dff2f-38ba-44f7-9fe2-af66e62674a4/preview.jpg"
            class="deferredLoad" alt="Compass" />
    </li>
</ul>

Sample javascript:

$(function() {
    $("img.deferredLoad").each(function() {
        var $this = $(this);
        $this.attr("src", $this.attr("deferredSrc")).removeClass("deferredLoad");
    });
});
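
One caveat: $(function(){...}) fires at DOM ready, which can be before the rest of the page’s images have finished downloading. If you’d rather start the previews only after everything else has loaded, a sketch of the same loop hung off the window load event instead:

$(window).load(function() {
    $("img.deferredLoad").each(function() {
        var $this = $(this);
        $this.attr("src", $this.attr("deferredSrc")).removeClass("deferredLoad");
    });
});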

Unexpected benefits of Precompilation of LINQ


I once had a manager who told me – I can solve any maintenance problem by adding a layer of abstraction.  I can solve any performance problem by removing a layer of abstraction.

I think LINQ to SQL is a wonderful way to abstract the persistence layer: elegant, easy to use, easy to manipulate, and easy to maintain.  Instead of writing SQL which amounts to “how to retrieve” the data – you manipulate an expression tree that gets closer to specifying “what data I want”.  The upside of this is huge – you can change the expression tree at any level of your code, and let .NET decide how to best write the SQL at the last possible moment – which effectively gets rid of dealing with intermediate results and inefficiently written SQL.  Unfortunately, this abstraction does indeed cause a performance hit – the translation/compilation of the tree to SQL – and it’s probably much bigger than you would think.  See http://peterkellner.net/2009/05/06/linq-to-sql-slow-performance-compilequery-critical/ to see what I mean.  In my analysis (using ANTS Profiler), when using uncompiled LINQ the performance hit is usually about 80% compilation and only 20% retrieving the data!  Thankfully, .NET does allow you to precompile a LINQ query and save the compilation to use over and over again.

Your natural tendency when hearing those kind of numbers might be to precompile every single LINQ query you write.  There’s a big downside to doing that, though – you lose the ability to manipulate the compiled query in other parts of your code.  Another downside is that the precompilation code itself is fairly ugly and hard to read/maintain.
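
To illustrate what you give up, here is a short sketch (the IsActive and Created columns are hypothetical) of the kind of expression-tree refinement that works with a plain IQueryable but not with a precompiled delegate:

public static IQueryable<Item> ActiveItems(ModelDataContext DataContext)
{
    // still just an expression tree - no SQL has been written yet
    return DataContext.Items.Where(i => i.IsActive);
}

// a caller can keep narrowing the tree; SQL is generated once, at enumeration:
// var recent = ActiveItems(DataContext).Where(i => i.Created > cutoff).Take(10);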

I’m a big believer in avoiding “premature optimization”.  What happens if you precompile everything, and in a version or two Microsoft resolves the issue and caches compilations for you behind the scenes?  You have written a ton of ugly code that breaks a major benefit of LINQ to SQL and is totally unnecessary.

Instead, I recommend you go after the low hanging fruit first – precompile the most frequently accessed queries in your application and the ones that gain no benefit from manipulating the expression tree.  In the applications I work on – there is a perfect case that fits both of these – the “get” method that returns the LINQ object representation of a single row in the database.  These are hit quite often – and there is absolutely no case where the expression tree is further refined.

The old way it was written:

	public static Item Get(int itemid) {
		return (from i in DataContext.Items
		        where i.ItemID == itemid
		        select i).First();
	}

The new way with Precompiled LINQ:

	private static Func<ModelDataContext, int, Item> GetQuery =
		CompiledQuery.Compile(
			(ModelDataContext DataContext, int itemid) =>
				(from i in DataContext.Items
				 where i.ItemID == itemid
				 select i).First()
		);

	public static Item Get(int itemid) {
		return GetQuery.Invoke(DataContext, itemid);
	}

Applying this fairly simple change to the application, I’d estimate we got 80%+ of the benefits of compiled LINQ, at the expense of a few extra lines of code per object/table and absolutely no loss of the tree manipulation.

Adapting a Development Process


One of the key reasons to choose a rapid development philosophy over a waterfall development philosophy is the ability to adapt to changing requirements.  Once you decide to go with some form of rapid development – which should you choose?  It seems like there are as many options as there are companies utilizing them.

There is no one correct answer.  My advice is to pick and choose what works best for your development team, your product, your customers, and your company.  In addition to adopting a development philosophy that allows rapid change in requirements of your software – be prepared to rapidly change the process itself.  Our process has changed dramatically over my years at Lanit due to new people, new priorities, new technologies, and new tools.

At Lanit, we are currently using a form of Agile software development.  We have very few rigid processes that the design/development team must follow – instead we have a toolbox from which we grab the appropriate tool in any situation.  Below I will give an overview of a lot of things we do here.  Hopefully some of them will be useful for you and your team.

Planning

Some agile shops will say that you should avoid all planning.  I don’t think that scales well to complicated problems  (and, I don’t really believe they don’t plan – they just have shorter term plans or personal plans).  We plan at Lanit – we are just ready for our plans to change at any time.  And, we are careful to scope our plans appropriately – longer term plans are more vague, shorter term plans get more specific.

Vision statements

Vision statements are simple, vague, big picture plans which are useful to guide the smaller picture plans later.  Be careful not to get too detailed so that it’s easy to adapt.

  • Company/Strategic planning – Lanit has a vague mission statement that pertains to everything we do, and helps guide our smaller plans.  Basically, we want to write software that improves people’s lives in a meaningful way.
  • Product planning – Each product sets forth its own broad goals.    For Foliotek, we want to make the accreditation process easy for schools by providing sensible, clean, and easy to use portfolio software.  We focus on making the portfolio process as simple as possible for students and faculty to ease the burden that adding a portfolio culture to a school can create.
  • Release planning – Often times, we have a common goal (or a few goals) that helps us decide on feature sets for each release we do.  Recently, we’ve had releases that focused on making the system more maintainable by adopting some newly available frameworks throughout a project.  An upcoming release will focus on adding more interactivity to the system through the use of more client scripting and ajax.

Releases and Feature evaluation

For existing products at Lanit, we develop in 8 week cycles.  If anything takes longer than that to get from idea to release, then we run the same risks as the waterfall model – either we build something that is too late to market (the market changes between when we plan and when we release), or we waste a bunch of effort because we got the plan wrong to begin with and spent months developing the wrong thing.  As with all rapid development philosophies – the point is to find out as soon as possible when you make a mistake and change course immediately.  Even inside of the 8 week cycle, you’ll see we allow customers to see and comment on complicated designs sooner.

For existing products – we keep feature wishlists that eventually evolve into planned feature sets for a particular release.  We use FogBugz (http://fogbugz.com) to store the wishlists, and items move to different projects (new ideas -> for development) and releases (undecided release -> Summer 01 2009) as we evaluate the lists.

  1. Keep an ongoing wishlist
    • from customers (help them succeed with how they want to use the product)
    • from business team (help sell the product to new customers)
    • from support team (spot trouble spots, ease support burden)
    • from development team (more maintainable code, or newer and better technologies)
  2. Shortly before starting to develop a release (at most a week, so that you have the best possible information), pull the most valuable ideas into a release wishlist
    • usually, stakeholders from support/business/dev make a ‘top ten’ type list, then combine them to create an initial release list
    • this is also a good time to completely eliminate ideas from the wishlist that are no longer valid
  3. Dev team comes up with very rough estimates to develop ideas
  4. Dev, support, and marketing ranks the wishlist based on cost/benefit type analysis (usually, in a meeting.  also a good time to describe and document the needs of each feature better).  Often, the idea is refined or simplified based on discussions in this meeting.  We always try to build the simplest useful version of a feature possible, and only add complexity after people have tried the simple version and more is still needed.
  5. Narrow down the release to a reasonable list based on available time and estimates
  6. Dev team works on the list in order of priority – Everyone knows that the bottom items may drop into the next release based on changing circumstances and priorities.  This also allows for new items to be injected at the cost of items at the bottom, and allows more time to think about the expensive, less well defined items that should be further down the list.

Designing/Developing Features

The rest of the work is taking a requested feature from description to implementation.  This process has the most variability – some features are small and easily understood, and a text description is enough to develop them.  Some features are more detailed or important and require more elaborate designs.  The most expensive features to implement should be discussed with customers at early stages to prevent wasted effort.  So, we never mandate that each feature must go through these steps – the dev team is allowed to determine which tasks are appropriate for the item they are working on.

  • Feature descriptions – pretty much every feature idea has at least a sentence in FogBugz describing it.  Typically, links to current screens are included (for “change” type requests) to get everyone on the same page.  Often, the descriptions are detailed during the release feature set prioritization meeting.
  • Paper sketches – if the feature has a small amount of sophistication, it is often useful for the developer to do a rough paper sketch for their own benefit.  This could be a UI sketch, a db model, a flow diagram, etc.
  • Informal discussion – sometimes, a brief chat about the feature is all that is necessary.  Face-to-face conversations can be a double-edged sword – they can be very powerful for the person that needs help, and very distracting for the other party.  We use yammer (http://yammer.com) for these kinds of communications so that each person can decide their level of interruptibility (each user can choose to have an IM-like client open, to get email notifications, to get daily email digests, etc – and can customize those options based on subject).  Many times, we still talk face to face – but we initiate conversations using yammer instead of physically disrupting the other person.
  • Plain ‘ol Whiteboards (POWs) – Sometimes, features are too hard to describe.  Others, the business team only has a need (this is too slow/complicated) but doesn’t have a clue how it should be solved.  In these cases, it’s useful to collaboratively sketch ideas at a whiteboard.
    • POWs can become real, permanent documentation!  We use a few handy tools in combo to make this happen:
      • A digital camera
      • An Eye-Fi (http://www.eye.fi/) wireless SD card – gets pictures to us without the hassle of a card reader
      • EverNote (http://www.evernote.com) – archives whiteboard photos.  Allows easy retrieval through tags, and can even OCR/search some handwritten text in a pic.  Integrates with Eye-Fi – so you get a new note with each pic without any hassle.  Syncs across all popular computers and smartphones.
      • Whiteboard Photo (http://www.polyvision.com/ProductSolutions/WhiteboardPhotoSoftware/tabid/284/Default.aspx) – software package that takes a photo of a whiteboard and cleans it up a ton – picture ends up looking like it was sketched in paint.  Allows copy-paste – so you can click the photo in evernote, ctrl-c, click whiteboard photo, ctrl-v, clean, and repeat in opposite direction.
  • Comps – sometimes, the detail needed is aesthetic.  In those cases, someone is commissioned to a more refined Photoshop or Fireworks comp (often based on a sketch).
  • Paper or digital sketch “prototypes” – sometimes, the feature/ui itself is complicated.  In these cases it’s useful to get feedback from inside the team and from customers before you write a bunch of code.  Most of the time, you can get the info you need by walking the customer through sketches – either by explaining and flipping through a succession of paper sketches, or by building digital sketches in Fireworks – which can be linked together to allow clicking between screens.  This is a good low-cost way to get something that feels a lot like a programmed prototype.
  • Coded prototype/betas – When a feature is very interactive, or is highly data driven, etc – sometimes you need something real to evaluate the design.  In those cases we build out the feature as small as possible and release it to a carefully chosen set of customers (or ourselves) for “real” use – and we tweak it before we release to everyone.

Testing and Maintenance

After the dev team believes it is done, the release is pushed to a testing area.  The main contact for each new feature request is responsible for testing it to make sure that it works properly and fills the intended need.  We usually spend about a week going back and forth with the support/sales teams until everyone is satisfied with the release.  Then, it goes out to our customers.

We’re not perfect.  Sometimes, bugs get out to the live environment.  For the highest priority issues, the support team can interrupt new development and get an immediate resolution.

Doing this for every trivial issue would severely hamper new development, so we limit these cases to a small number per year. For all other issues, we have a weekly patch schedule.  Support reports problems (to another area in FogBugz), and we fix throughout the week.  On Mondays, we send the set of fixes all out at once.  To keep the developers sane, we rotate the developer responsible for writing fixes each week.

This schedule allows the dev team to stay focused on making the product better – but also allows the support team to be responsive about issues in the system.  Customers are more accepting of problems when we can tell them when it will be fixed.

“Green Field” development

So far, I’ve focused on how we develop changes and additions for our existing products.  Many of these techniques are also useful for developing brand new products.  Planning new projects can often be more complicated, though, and features aren’t as well understood to begin with.  Many more decisions need to be made.

  • Brainstorming sessions – Early on, the idea is very vague.  The quickest way to narrow it down into a plan is to get people into a room and come up with ideas.  Be sure to involve potential customers.  We’ve been very successful by developing “advisory boards” of people who are in your market – and allowing them to help brainstorm and design the product.  When they are done, not only does your product fit the market better – but you end up with a group of devoted customers right off the bat since they feel some ownership of the product.
  • Multi disciplinary team design sessions – IDEO developed a method where you take a problem, and design separate solutions in several small groups of about three or four.  Then, you come back and evaluate as a group and combine the ideas into one solution.  This can be very useful for developing a feature set for a new product.  For best results, each team should have a tech person, a business person, a support person, etc.
  • User Studies – The best way to get all of the little details right is to sit down with a real user and watch them try to use your new product.  You don’t need expensive equipment – just sit down and watch, and take notes or record with a webcam.  You don’t need a completely functioning system – you can (and should) walk users through paper sketches (what would you click on here – ok, now you see this) and later have them use sketch prototypes (click through – ok that would be added here when its a real system).  If the system is really interactive, build a simple html/js prototype.  You also don’t need scientific population samples –  any 3-5 people you grab (your spouse, neighbors, friends…) will catch all of your really important usability problems.

Accessible Custom AJAX and .NET


One general rule in making an accessible web application is that you shouldn’t change content of the page with javascript. This is because screen readers have a tough time monitoring the page and notifying the user of dynamic changes. Usually, the reason you would use Ajax is exactly that – do something on the server, and then update some small part of the page without causing the whole page to refresh. That means if you have an “ajaxy” page and you care about accessibility, you have to provide a non-ajax interface as a fallback for screen readers.

Ajax.NET, Microsoft’s library for Ajax, makes this fallback easy to implement. Microsoft has you define your AJAX properties in an abstraction layer – the page markup – which means that the framework can decide not to render the AJAX code to certain browsers (and screen readers) that will not support it, and instead use the standard postback method.

The problem with Ajax.NET is that the communication can be bloated (mostly because of the abstraction layer, it sends more post values than you might need – like the encrypted viewstate value), which negates many of the benefits of an Ajax solution. I really wanted to roll my own ajax communications to make them as lightweight as possible.

My solution was to write the page in a standard .NET postback manner, and then use a user-defined property that would allow javascript to replace the postback in javascript/jQuery with an Ajax call.

Here’s my code:

$(function() {
    if (serverVars.uiVersion != "accessible") { // use js/ajax for checking/unchecking where possible
        var $todos = $("#chkToDo");
        $todos.removeAttr("onclick"); // remove postback
        $todos.click(
            function() {
                // some stuff altering the document, and an ajax call to report the change to the db
            }
        );
    }
});
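
For completeness, here is a hypothetical version of the elided click handler (the /todo/check url and parameter names are invented for illustration): update the document immediately, then report the change to the server with a lightweight post.

$todos.click(function() {
    var $box = $(this);
    // reflect the change in the UI right away
    $box.closest("li").toggleClass("done", $box.is(":checked"));
    // then tell the server, sending only the values it needs
    $.post("/todo/check", { id: $box.val(), checked: $box.is(":checked") });
});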

This works great, although you need to be careful about your server-side events. In my case, I had an OnCheckChanged event to handle the postback/accessible mode. Even though checking/unchecking the box no longer fired an autopostback, ASP.NET will still fire the CheckChanged event if you post back later for some other reason (e.g. a linkbutton elsewhere on the page) after the checked status has changed. So, if a user changed the state of a checkbox and then clicked another link button on the page – instead of sending the user to the intended page, my app just refreshed the whole page (because my CheckChanged event redirected to reload the page, which caused it to skip the ‘click’ event of the linkbutton). Once I realized this was happening, it was easy enough to fix – I just needed to run the event logic only if the user was in accessibility mode. I spent a little time running in circles on that one though; at first I thought my client-side document changes were causing a ViewState validation error on the server.