Unexpected benefits of Precompilation of LINQ

I once had a manager who told me: "I can solve any maintenance problem by adding a layer of abstraction.  I can solve any performance problem by removing a layer of abstraction."

I think LINQ to SQL is a wonderful way to abstract the persistence layer: elegant, easy to use, easy to manipulate, and easy to maintain.  Instead of writing SQL that amounts to "how to retrieve" the data, you manipulate an expression tree that gets closer to specifying "what data I want".  The upside of this is huge: you can change the expression tree at any level of your code and let .NET decide how to best write the SQL at the last possible moment, which effectively gets rid of intermediate results and inefficiently written SQL.  Unfortunately, this abstraction does cause a performance hit (the translation/compilation of the tree to SQL), and it's probably much bigger than you would think.  See http://peterkellner.net/2009/05/06/linq-to-sql-slow-performance-compilequery-critical/ for what I mean.  In my analysis (using ANTS Profiler), when using uncompiled LINQ the performance hit is usually about 80% compilation and only 20% retrieving the data!  Thankfully, .NET does allow you to precompile a LINQ query and save the compilation to use over and over again.

Your natural tendency when hearing those kinds of numbers might be to precompile every single LINQ query you write.  There's a big downside to doing that, though: you lose the ability to manipulate the compiled query in other parts of your code.  Another downside is that the precompilation code itself is fairly ugly and hard to read and maintain.
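
To make that tradeoff concrete, here's a rough sketch of the kind of expression-tree manipulation you give up when you precompile.  This is a hypothetical example: the activeOnly flag and IsActive column aren't from the real app, and DataContext is the same static property the Get() samples below use.

	public static IQueryable<Item> Search(bool activeOnly) {
		// Nothing has been sent to SQL yet - this only builds up an expression tree.
		IQueryable<Item> items = DataContext.Items;

		if (activeOnly)
			items = items.Where(i => i.IsActive); // refine the tree conditionally

		// Callers can still add paging, sorting, or more filters before it executes.
		return items;
	}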

I'm a big believer in avoiding "premature optimization".  What happens if you precompile everything, and in a version or two Microsoft resolves the issue and caches compilations for you behind the scenes?  You'll have written a ton of ugly code that breaks a major benefit of LINQ to SQL and is totally unnecessary.

Instead, I recommend you go after the low-hanging fruit first: precompile the most frequently accessed queries in your application and the ones that gain no benefit from manipulating the expression tree.  In the applications I work on, there is a perfect case that fits both: the "get" method that returns the LINQ object representation of a single row in the database.  These methods are hit quite often, and there is absolutely no case where the expression tree is further refined.

Here's how it was written before:

	public static Item Get(int itemid) {
		return (from i in DataContext.Items
				where i.ItemID == itemid
				select i).First();
	}

The new way with Precompiled LINQ:

	private static Func<ModelDataContext, int, Item>
		GetQuery = CompiledQuery.Compile(
			(ModelDataContext DataContext, int itemid) =>

				(from i in DataContext.Items
					where i.ItemID == itemid
				select i).First()

				);

	public static Item Get(int itemid) {
		return GetQuery.Invoke(DataContext,itemid);
	}

Applying this fairly simple change to the application, I'd estimate we got 80%+ of the benefits of compiled LINQ at the expense of a few extra lines of code per object/table, with absolutely no loss of tree manipulation.

Adapting a Development Process

One of the key reasons to choose a rapid development philosophy over a waterfall development philosophy is the ability to adapt to changing requirements.  Once you decide to go with some form of rapid development – which should you choose?  It seems like there are as many options as there are companies utilizing them.

There is no one correct answer.  My advice is to pick and choose what works best for your development team, your product, your customers, and your company.  In addition to adopting a development philosophy that allows rapid change in requirements of your software – be prepared to rapidly change the process itself.  Our process has changed dramatically over my years at Lanit due to new people, new priorities, new technologies, and new tools.

At Lanit, we are currently using a form of Agile software development.  We have very few rigid processes that the design/development team must follow – instead, we have a toolbox from which we grab the appropriate tool in any situation.  Below is an overview of a lot of the things we do here.  Hopefully some of them will be useful for you and your team.

Planning

Some agile shops will say that you should avoid all planning.  I don't think that scales well to complicated problems (and I don't really believe they don't plan – they just have shorter-term plans or personal plans).  We do plan at Lanit – we are just ready for our plans to change at any time.  And we are careful to scope our plans appropriately: longer-term plans are more vague, shorter-term plans get more specific.

Vision statements

Vision statements are simple, vague, big picture plans which are useful to guide the smaller picture plans later.  Be careful not to get too detailed so that it’s easy to adapt.

  • Company/Strategic planning – Lanit has a vague mission statement that pertains to everything we do, and helps guide our smaller plans.  Basically, we want to write software that improves people’s lives in a meaningful way.
  • Product planning – Each product sets forth its own broad goals.    For Foliotek, we want to make the accreditation process easy for schools by providing sensible, clean, and easy to use portfolio software.  We focus on making the portfolio process as simple as possible for students and faculty to ease the burden that adding a portfolio culture to a school can create.
  • Release planning – Often times, we have a common goal (or a few goals) that helps us decide on feature sets for each release we do.  Recently, we’ve had releases that focused on making the system more maintainable by adopting some newly available frameworks throughout a project.  An upcoming release will focus on adding more interactivity to the system through the use of more client scripting and ajax.

Releases and Feature evaluation

For existing products at Lanit, we develop in 8 week cycles.  If anything takes longer than that to get from idea to release, then we run the same risks as the waterfall model: we either build something that is too late to market (the market changes between when we plan and when we release), or we waste a bunch of effort because we got the plan wrong to begin with and spent months developing the wrong thing.  As with all rapid development philosophies, the point is to find out as soon as possible when you make a mistake and change course immediately.  Even inside of the 8 week cycle, you'll see we allow customers to see and comment on complicated designs sooner.

For existing products – we keep feature wishlists that eventually evolve into planned feature sets for a particular release.  We use FogBugz (http://fogbugz.com) to store the wishlists, and items move to different projects (new ideas -> for development) and releases (undecided release -> Summer 01 2009) as we evaluate the lists.

  1. Keep an ongoing wishlist
    • from customers (help them succeed with how they want to use the product)
    • from business team (help sell the product to new customers)
    • from support team (spot trouble spots, ease support burden)
    • from development team (more maintainable code, or newer and better technologies)
  2. Shortly before starting to develop a release (at most a week, so that you have the best possible information), pull the most valuable ideas into a release wishlist
    • usually, stakeholders from support/business/dev make a ‘top ten’ type list, then combine them to create an initial release list
    • this is also a good time to completely eliminate ideas from the wishlist that are no longer valid
  3. Dev team comes up with very rough estimates to develop ideas
  4. Dev, support, and marketing rank the wishlist based on a cost/benefit type of analysis (usually in a meeting, which is also a good time to describe and document the needs of each feature better).  Often, the idea is refined or simplified based on discussions in this meeting.  We always try to build the simplest useful version of a feature possible, and only add complexity after people have tried the simple version and more is still needed.
  5. Narrow down the release to a reasonable list based on available time and estimates
  6. Dev team works on the list in order of priority – Everyone knows that the bottom items may drop into the next release based on changing circumstances and priorities.  This also allows for new items to be injected at the cost of items at the bottom, and allows more time to think about the expensive, less well defined items that should be further down the list.

Designing/Developing Features

The rest of the work is taking a requested feature through to implementation.  This process has the most variability: some features are small and easily understood, and a text description is enough to develop them.  Some features are more detailed or important and require more elaborate designs.  The most expensive features to implement should be discussed with customers at early stages to prevent wasted effort.  So, we never mandate that each feature must go through these steps – the dev team is allowed to determine which tasks are appropriate for the item they are working on.

  • Feature descriptions – pretty much every feature idea has at least a sentence in FogBugz describing it.  Typically, links to current screens are included (for “change” type requests) to get everyone on the same page.  Often, the descriptions are detailed during the release feature set prioritization meeting.
  • Paper sketches – if the feature has a small amount of sophistication, it is often useful for the developer to do a rough paper sketch for their own benefit.  This could be a UI sketch, a db model, a flow diagram, etc.
  • Informal discussion – sometimes, a brief chat about the feature is all that is necessary.  Face-to-face conversations can be a double-edged sword: they can be very powerful for the person who needs help, and very distracting for the other party.  We use Yammer (http://yammer.com) for these kinds of communications so that each person can decide their level of interruptibility (each user can choose to have an IM-like client open, to get email notifications, to get daily email digests, etc., and can customize those options by subject).  Many times, we still talk face to face – but we initiate conversations using Yammer instead of physically disrupting the other person.
  • Plain 'ol Whiteboards (POWs) – Sometimes, features are too hard to describe.  Other times, the business team only has a need ("this is too slow/complicated") but doesn't have a clue how it should be solved.  In these cases, it's useful to collaboratively sketch ideas at a whiteboard.
    • POWs can become real, permanent documentation!  We use a few handy tools in combo to make this happen:
      • A digital camera
      • An Eye-Fi (http://www.eye.fi/) wireless SD card – gets pictures to us without the hassle of a card reader
      • EverNote (http://www.evernote.com) – archives whiteboard photos.  Allows easy retrieval through tags, and can even OCR/search some handwritten text in a pic.  Integrates with Eye-Fi, so you get a new note with each pic without any hassle.  Syncs across all popular computers and smartphones.
      • Whiteboard Photo (http://www.polyvision.com/ProductSolutions/WhiteboardPhotoSoftware/tabid/284/Default.aspx) – software package that takes a photo of a whiteboard and cleans it up a ton – the picture ends up looking like it was sketched in Paint.  Allows copy-paste, so you can click the photo in EverNote, ctrl-c, click Whiteboard Photo, ctrl-v, clean, and repeat in the opposite direction.
  • Comps – sometimes, the detail needed is aesthetic.  In those cases, someone is commissioned to produce a more refined Photoshop or Fireworks comp (often based on a sketch).
  • Paper or digital sketch "prototypes" – Sometimes, the feature/UI itself is complicated.  In these cases it's useful to get feedback from inside the team and from customers before you write a bunch of code.  Most of the time, you can get the info you need by walking the customer through sketches: either by explaining and flipping through a succession of paper sketches, or by building digital sketches in Fireworks, which can be linked together to allow clicking between screens.  This is a good low-cost way to get something that feels a lot like a programmed prototype.
  • Coded prototypes/betas – When a feature is very interactive, is highly data-driven, etc., sometimes you need something real to evaluate the design.  In those cases we build out the feature as small as possible and release it to a carefully chosen set of customers (or ourselves) for "real" use, and we tweak it before we release to everyone.

Testing and Maintenance

After the dev team believes it is done, the release is pushed to a testing area.  The main contact for each new feature request is responsible for testing it to make sure that it works properly and fills the intended need.  We usually spend about a week going back and forth with the support/sales teams until everyone is satisfied with the release.  Then, it goes out to our customers.

We’re not perfect.  Sometimes, bugs get out to the live environment.  For the highest priority issues, the support team can interrupt new development and get an immediate resolution.

Doing this for every trivial issue would severely hamper new development, so we limit these cases to a small number per year. For all other issues, we have a weekly patch schedule.  Support reports problems (to another area in FogBugz), and we fix throughout the week.  On Mondays, we send the set of fixes all out at once.  To keep the developers sane, we rotate the developer responsible for writing fixes each week.

This schedule allows the dev team to stay focused on making the product better – but also allows the support team to be responsive about issues in the system.  Customers are more accepting of problems when we can tell them when it will be fixed.

“Green Field” development

So far, I’ve focused on how we develop changes and additions for our existing products.  Many of these techniques are also useful for developing brand new products.  Planning new projects can often be more complicated, though, and features aren’t as well understood to begin with.  Many more decisions need to be made.

  • Brainstorming sessions – Early on, the idea is very vague.  The quickest way to narrow it down into a plan is to get people into a room and come up with ideas.  Be sure to involve potential customers.  We’ve been very successful by developing “advisory boards” of people who are in your market – and allowing them to help brainstorm and design the product.  When they are done, not only does your product fit the market better – but you end up with a group of devoted customers right off the bat since they feel some ownership of the product.
  • Multidisciplinary team design sessions – IDEO developed a method where you take a problem and design separate solutions in several small groups of about three or four.  Then, you come back, evaluate as a group, and combine the ideas into one solution.  This can be very useful for developing a feature set for a new product.  For best results, each team should have a tech person, a business person, a support person, etc.
  • User Studies – The best way to get all of the little details right is to sit down with a real user and watch them try to use your new product.  You don't need expensive equipment – just sit down and watch, and take notes or record with a webcam.  You don't need a completely functioning system – you can (and should) walk users through paper sketches (what would you click on here – ok, now you see this) and later have them use sketch prototypes (click through – ok, that would be added here when it's a real system).  If the system is really interactive, build a simple html/js prototype.  You also don't need scientific population samples – any 3-5 people you grab (your spouse, neighbors, friends…) will catch all of your really important usability problems.

Accessible Custom AJAX and .NET

One general rule in making an accessible web application is that you shouldn't change the content of the page with JavaScript. This is because screen readers have a tough time monitoring the page and notifying the user of dynamic changes. Usually, the reason you would use Ajax is exactly that: do something on the server, and then update some small part of the page without causing the whole page to refresh. That means if you have an "ajaxy" page and you care about accessibility, you have to provide a non-Ajax interface as a fallback for screen readers.

ASP.NET AJAX, Microsoft's Ajax library, makes this fallback easy to implement. Microsoft has you define your Ajax behavior in an abstraction layer (the page markup), which means the framework can decide not to render the Ajax code to certain browsers (and screen readers) that won't support it, and instead use the standard postback method.

The problem with ASP.NET AJAX is that the communication can be bloated (mostly because of the abstraction layer, it sends more post values than you might need, like the encrypted ViewState value), which negates many of the benefits of an Ajax solution. I really wanted to roll my own Ajax communications to make them as lightweight as possible.

My solution was to write the page in a standard .NET postback manner, and then use a user-defined property that lets JavaScript/jQuery replace the postback with an Ajax call where the user's browser can handle it.

Here’s my code:

$(function() {
    if (serverVars.uiVersion != "accessible") { // use js/ajax for checking/unchecking where possible
        var $todos = $("#chkToDo");
        $todos.removeAttr("onclick"); // remove postback
        $todos.click(
            function() {
                // some stuff altering the document, and an ajax call to report the change to the db
            }
        );
    }
});

This works great, although you need to be careful about your server-side events. In my case, I had a CheckedChanged event to handle the postback/accessible mode. Even though checking/unchecking the box no longer fired an autopostback, ASP.NET will still fire the CheckedChanged event if you post back later for some other reason (e.g. a LinkButton elsewhere on the page) after the checked status has changed. So, if a user had changed the state of a checkbox and then clicked another LinkButton on the page, instead of sending the user to the intended page, my app just refreshed the whole page (because my CheckedChanged event redirected to reload the page, which caused it to skip the 'click' event of the LinkButton). Once I realized this was happening, it was easy enough to fix: I just needed to run the event logic only if the user was in accessibility mode. I spent a little time running in circles on that one, though; at first I thought my client-side document changes were causing a ViewState validation error on the server.
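
Roughly, the fix looked like this (a sketch rather than the exact production code; the control name and the session key holding the UI mode are hypothetical):

    protected void chkToDo_CheckedChanged(object sender, EventArgs e)
    {
        // In js/ajax mode, the client script has already reported the change,
        // so only run the server-side logic for the accessible (postback) UI.
        if ((string)Session["uiVersion"] != "accessible")
            return;

        // ...persist the new checked state and redirect/reload as before...
    }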

Avoid static variables in ASP.NET

Occasionally, I like to write static methods on classes. They are useful whenever the method loosely relates to the class but doesn't involve a specific instance of the class.

A good example of a static method in the .NET library is the int.Parse(string val) method. This method basically reads a string and tries to return an integer representation. The method logically fits with the int class (I suppose a string could have a ToInt() instance method… but that would get overwhelming as you added ToFloat(), ToDecimal(), To…()), but when you run it, there's no reason to have an instance of int already. You run it to create a new instance of int.

Commonly, I follow similar logic in the business layer of my webapps. I write a static Add(param1,….,paramx) method that returns an instance of the class after it is persisted to the database. I could accomplish this in other ways, but I think the resulting application code reads better like:

      User.Add(username,password);

Than:

      new User(username,password).Add();

or, worse:

      DataContext.Users.InsertOnSubmit(new User(username,password));
      DataContext.SubmitChanges();
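
For reference, here's a rough sketch of what such a static Add might look like (assuming a LINQ to SQL User entity with hypothetical Username and Password columns, and the static DataContext property discussed below):

      public partial class User
      {
            // Hypothetical static factory: persists a new row and returns the entity.
            public static User Add(string username, string password)
            {
                 User user = new User { Username = username, Password = password };
                 DataContext.Users.InsertOnSubmit(user);
                 DataContext.SubmitChanges();
                 return user;
            }
      }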

The issue with static methods in your business logic is that you often need a common object to talk to the database through: a SqlConnection, a TableAdapter, a LINQ DataContext, etc. I could certainly create those objects locally inside each static method, but that's time-consuming and hard to maintain. I want instead to define a common property (in the business class or a parent class of it) that lazy-initializes and returns an instance of the object when I need it. The problem is that the property must also be static for a static method to have access to it.

The easiest way to accomplish this is something like:

      private static ModelDataContext dataContext = null;
      protected static ModelDataContext DataContext
      {
            get
            {
                 if (dataContext == null)
                     dataContext = new ModelDataContext();
                 return dataContext;
            }
      }

The tricky thing is that this will probably work in development and testing, until you get some load on your website. Then, you’ll start seeing all kinds of weird issues that may not even point to this code as the problem.

Why is this an issue? It’s all about how static variables are scoped inside a ASP.NET application. Most web programmers think of each page in their application as its own program. You have to manually share data between pages when you need to. So, they incorrectly assume that static variables are somehow tied to a web request or page in their application. This is totally wrong.

Really, your whole website is a single application, which spawns threads to deal with requests, and requests are dealt with by the code on the appropriate page. Think of a page in your webapp as a method inside of one big application, and not an application of its own – a method which is called by the url your visitor requests.

Why does this matter? Because static variables are not tied to any specific instance of any specific class, they must be created in the entire application’s scope. Effectively, ASP.NET static variables are the same as the global variables that all your programming teachers warned you about.
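
If you want to see this for yourself, here's a quick throwaway sketch (not from any real app): drop a static counter into a page and watch it keep climbing across different browsers, sessions, and users.

      public partial class ScratchPage : System.Web.UI.Page
      {
            // One variable for the whole application, shared by every request and every user.
            private static int hitCount = 0;

            protected void Page_Load(object sender, EventArgs e)
            {
                 hitCount++; // not thread-safe either, which is part of the problem
                 Response.Write("Hits so far: " + hitCount);
            }
      }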

That means that, for the property above, every single request/page/user of your website will reuse the first instance of DataContext that was created. That's bad for several reasons. LINQ DataContexts cache some of the data and changes you make, so you can quickly eat up memory if each instance isn't disposed fairly quickly. TableAdapters hold open SqlConnections for reuse, so if you use enough TableAdapter classes, you can have enough different static variables to tie up all of your db connections. And because requests can happen simultaneously, you can also end up with lots of locking and competing accesses to the variable.

What should you do about it? In my case, I take advantage of static properties that reference collections that are scoped to some appropriately short-lived object for my storage. For instance, System.Web.HttpContext.Current.Items:

      protected static ModelDataContext DataContext
      {
            get
            {
                 if (System.Web.HttpContext.Current.Items["ModelDataContext"] == null)
                     System.Web.HttpContext.Current.Items["ModelDataContext"] = new ModelDataContext();
                 return (ModelDataContext)System.Web.HttpContext.Current.Items["ModelDataContext"];
            }
      }

In this case, each hit on the site gets its own DataContext, which goes away when the request ends, and DataContexts will never be shared between two users accessing the site simultaneously. You could also use collections like ViewState, Session, Cache, etc. (and I've tried several of these). For my purposes, the HttpContext.Items collection scopes my objects for exactly where I want them to be accessible and exactly how long I want them to be alive.

Some Good Uses for the ASP.NET .ASHX/”Generic Handler”

I've been developing ASP.NET applications since 2002.  Up until recently, I'd overlooked a pretty useful part of the ASP.NET Framework: Generic Handlers.

Generic Handlers are basically HTTP Handlers that process requests to exactly one URL (HTTP Handlers are classes that need to be registered in the web.config for the URLs they handle, and often manipulate requests/responses that go to/come from .aspx pages).  They are "closer to the wire" than .ASPX WebForms or .ASMX WebServices.  They are built with only two requirements.  First, implement:

public void ProcessRequest(HttpContext context);

(this contains the logic that runs when the handler's URL is requested).  Second, implement:

public bool IsReusable { get; }

(this tells ASP.NET whether it can use one instance to serve multiple requests).

Generic Handlers are similar to web forms: they are reachable by a standard GET/POST to the URL where they are located.  ASP.NET starts you off with a context object (passed into the ProcessRequest() function you implement), which you then use to read the request (form and query values, etc.) and to write to the response object (to send info to the browser).  You can decide in what capacity you want that context to have access to the session: read/write, read-only, or none. Unlike web forms, generic handlers skip some of the niceties: the designer/markup/server-control model, themes, ViewState, etc. Thus, generic handlers can be much more efficient for many tasks that make no use of those features.

Here’s a few common tasks where a generic handler would be more suitable than a web form:

  • Writing out the contents of files stored in the db/elsewhere for download (see the handler sketched just after this list)
  • Writing out manipulated images, generated graphics, etc
  • Writing out generated PDFs/CSVs etc
  • Writing out data without the webservice overhead
    • xml for other systems or sites
    • html for simple AJAX get requests to replace page content
    • JSON for more advanced ajax data communication
  • Callback urls for data sent from other sites through a http post
  • Building a “REST API”
  • A url that outputs status info about the website, for use by a content switch or other monitoring system
  • Writing out dynamic JS/CSS based on server variables, browser capabilities, etc
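
To make the first bullet concrete, here's a rough sketch of a download handler (an illustration only; the Documents table and its Name/MimeType/Data columns are hypothetical, and a System.Linq using is assumed):

    public class DownloadHandler : System.Web.IHttpHandler
    {
        public void ProcessRequest(System.Web.HttpContext context)
        {
            int id = int.Parse(context.Request["id"]);

            // Hypothetical lookup: a Documents table with Name, MimeType, and Data (binary) columns.
            var doc = new ModelDataContext().Documents.First(d => d.DocumentID == id);

            context.Response.ContentType = doc.MimeType;
            context.Response.AddHeader("Content-Disposition", "attachment; filename=" + doc.Name);
            context.Response.BinaryWrite(doc.Data.ToArray());
        }

        public bool IsReusable
        {
            get { return true; }
        }
    }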

Basically, anywhere I previously found myself creating an ASPX page, but clearing everything from the .ASPX and using Response.Write()/Response.BinaryWrite() to talk to the browser or another system from the code-behind, I should have been using an .ASHX handler instead and saving my webserver some work.

Here is a sample Generic Handler to get you started. It writes out a simple JS file that gives you some useful globals you can use in other JS files. It's meant to be just an example of the types of things you can do with generic handlers, and doesn't necessarily produce what you might want in your own dynamic JS file.


    public class SiteJSGlobals : System.Web.IHttpHandler, System.Web.SessionState.IReadOnlySessionState
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/javascript";

            context.Response.Write("var _g_username='" + context.User.Identity.Name + "';\n");
            context.Response.Write("var _g_usertype='" + context.Session["UserType"] + "';\n");
            context.Response.Write("var _g_browserheight=" + context.Request.Browser.ScreenPixelsHeight + ";\n");
            context.Response.Write("var _g_browserwidth=" + context.Request.Browser.ScreenPixelsWidth + ";\n");
            context.Response.Write("var _g_queryparam='" + context.Request["queryparam"] + "';\n");
            context.Response.Write("var _g_developmentmode=" + context.Request.Url.Host.ToLower().StartsWith("localhost") + ";\n");
            context.Response.Write("var _g_approoturl='" + context.Request.ApplicationPath + "';\n");
        }
        public bool IsReusable
        {
            get
            {
                return false;
            }
        }
    }

Google Wave and Higher Education

If you haven’t seen Google’s IO presentation on its newly unveiled project, you should: watch the Google Wave preview.

Basically, it’s a real time collaborative editing and communication tool. Imagine a wiki, email, a message board, and instant messaging all rolled into one seamless product, and you still won’t quite realize how cool it is going to be.

So, a wave is a document. A document that has more than one editor. A document that is updated in real time on every contributor's screen, character-for-character. A document that allows inline edits for message context, but still keeps track of who did what for accountability. A document that shows exactly how it got to where it is with a sweet playback tool.

This seems like the solution for a Higher-Education group project. It could facilitate group meetings/communications. Groups could even produce the project/documentation itself collaboratively – even with inline conversations about individual edits (you can later create a new wave without the conversations). If instructors required access to the project wave, they could have extremely high visibility as to who contributed what to a group project for grading.

It also seems like a wave would be great for a classroom session. Instructors could present their lessons in-wave. Students could ask questions in-line, teachers could answer them. The whole classroom conversation could be replayed for anyone who missed class. Instructors could even replay the wave to determine class participation, or easily take a roll call. It would be great for a brick-and-mortar lesson – and it would be revolutionary for online course systems.

Once I calmed down a bit after watching the presentation, I realized that there are three major issues to deal with when using waves in higher education:

  1. FERPA Compliance (protecting access to student work)
    • Documents/correspondence can end up on outside (Google) servers
    • Students would need notification
  2. Section 508 Compliance /Accessibility
    • Live edit mode in a browser doesn't work with screen readers and other assistive technologies used by blind users
  3. Browser Compatibility
    • HTML5 is not widely supported – only betas/nightlies of Safari, Chrome, Firefox. No IE support – not even for a formally planned version

Thankfully, Google saw fit to make Wave highly extensible. The protocol is completely open (so you could build your own client) – they are even going to open source “the lion’s share” of their client. This allows solutions to those three issues:

  1. You can host your own wave server – student work can reside on the same servers as your courseware/etc
  2. Since the wave protocol is open, you (or the developer community) could build a desktop wave client that could speak out live document changes or provide other assistance for impaired users.
  3. With the Wave extension API, you could build a robot that could send wave changes to an HTML4 document, which would be viewable in any browser.  You could also create an HTML4 web form interface to add content to a wave through a bot.  Not only would this solution work for older browsers, it would also solve 2): screen readers can handle the well-written, static HTML4 document this robot would produce.

I’m extremely excited about Waves and how they can work for education. Hopefully Google accepts our developer application so I can get my hands on it soon.

Finding real body height using jQuery

Sometimes, you need to know the full height of a document's content. In our case, we needed to set an iframe to be the full height of its content.  There are several ways to attempt this:

document.body.clientHeight;
document.body.offsetHeight;
document.body.scrollHeight;

I’ve found that all of these require significant tweaking to get accurate results across all browsers. Each property means something different in each browser. That sucks for coding and maintenance. Enter jQuery:

$("body").height();

Unfortunately, even that doesn’t seem to work very well in IE.

Here’s a workaround that should always work, though. Basically, you create a temporary div that contains everything from the body, insert it off screen, and measure it:

function getDocumentHeight()
{
    if ($.browser.msie)
    {
        var $temp = $("<div>")
            .css("position", "absolute")
            .css("left", "-10000px")
            .append($("body").html());

        $("body").append($temp);
        var h = $temp.height();
        $temp.remove();
        return h;
    }
    return $("body").height();
}

It seems like it would be really inefficient, but it actually performs fairly well.