Ajax reconsidered

I’ve been thinking about why Ajax is taking off these days and creating great excitement when, at the time we originally built it in 1997 (DHTML) and 1998 (the XML over HTTP control), it had almost no uptake. In 1997 I spent a month just chatting with customers about DHTML and what they liked and disliked. In general, they were not fans. They saw the web as a two-edged sword. On the one hand, it offered instant and universal access to all their customers, an opportunity they couldn’t afford to pass up. On the other hand, they were terrified by the support costs of having millions or tens of millions of customers using their software. Accordingly, they wanted applications (aka web sites) that were as simple as possible to figure out how to use. Unlike productivity applications, which Microsoft at least flatters itself that its customers use every day, these were applications (web sites) which might be used only once, or at most once a week (except for the brief insanity of day-trading). There is a trade-off between ease of learning and richness of UI. Toolbar icons and right clicks and drag/drop and so on are often great accelerators, but they aren’t necessarily obvious. Filling in fields and clicking on URLs usually are. The customers, worrying about support for their own customers, were emphatically not in favor of rich internet applications. They wanted reach, not rich. So why has this changed? I think that there are three reasons.

First, the applications that are taking off today in Ajax aren’t customer support applications per se. They are more personal applications, like mail or maps or schedules, which often are used daily. Also, people are a lot more familiar with the web, so slowly extending the idiom for things like expand/collapse is a lot less threatening than it was then. Google Maps, for example, uses panning to move around the map, and people seem to love it.

Secondly, the physics didn’t work in 1997. A lot of Ajax applications contain a lot of script (often 10,000 or 20,000 lines), and without broadband, downloading it can be extremely painful. With broadband and the standard tricks for compressing the script, it is a breeze. And even if you could have downloaded that much script in 1997, it would have run too slowly. JavaScript wasn’t fast enough to respond in real time to user actions, let alone to fetch some related data over HTTP. But Moore’s law has come to its rescue, and what was sluggish in 1997 is often lightning quick today.

Finally, in 1997, or even in 1999, there wasn’t a practical way to write these applications to run on all browsers. Today, with work, this is doable. It would be nice if the same code ran identically on Firefox, IE, Opera, and Safari; in practice, even when it does, it doesn’t run optimally on all of them, and some custom coding is required for each one. This isn’t ideal, but it is manageable.
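As a sketch of what that per-browser coding looks like in practice (the helper name here is mine, not anything standard): even creating the XML over HTTP object requires feature detection, since older versions of IE expose it only as an ActiveX control while the other browsers expose a native object.

```javascript
// Hypothetical helper: obtain an XML-over-HTTP request object in a
// cross-browser way, circa 2005.
function createXhr() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                   // Firefox, Opera, Safari, IE7+
  }
  if (typeof ActiveXObject !== "undefined") {
    return new ActiveXObject("Microsoft.XMLHTTP"); // older IE
  }
  return null;                                     // no Ajax support at all
}
```

Every library of the era wraps some variant of this check, and similar forks show up for event handling and DOM quirks, which is exactly the custom coding referred to above.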

My son (Alex Bosworth) wrote a popular post a week ago on the pitfalls of Ajax applications, but he left out some of the features still missing from Ajax applications:

First, printing is still hard. The browser has never grown up enough to let the page author easily describe an alternate layout for printing, which is a shame. Why isn’t there an “HTML for printing” which can describe rotation, frozen column or row headers, and so on?

Secondly, the browser isn’t a good listener for external events. If you want to build an application that, for example, shows you instantly when someone bids or a price changes, it is hard. You can poll, but poll too frequently and the application starts to feel sluggish, and even the polling itself isn’t easy to set up. What you really want is an event-driven model where, in addition to events like typing, the page can describe events like an XMPP message, a VOIP request, or a data-changed post for an ATOM feed.
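Absent such an event-driven model, polling is all you have. A minimal sketch of one poll step (all the names here are illustrative, not a real API; `check` stands in for an XMLHttpRequest round trip to the server):

```javascript
// One poll step: ask the server whether anything changed since the last
// version we saw, invoke onChange if so, and return the version to carry
// into the next poll.
function pollStep(check, lastSeen, onChange) {
  var result = check(lastSeen);   // e.g. { changed: true, version: 7, data: ... }
  if (result.changed) {
    onChange(result.data);        // update the page with the new bid/price
  }
  return result.version;
}
```

In a real page this step would run on a timer such as `setInterval`, and the interval itself becomes the awkward trade-off described above: too short and the server drowns in requests, too long and the application feels stale.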

Third, if you want the application to run offline, you are essentially out of luck. I’ve written about this at length before in this blog and don’t need to repeat what is required in detail. To summarize what I said earlier: a local cache, a smart template model, and a synchronization protocol are required to build applications that run equally well connected and disconnected, and the way the Blackberry works is a role model for all of us here.
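As a toy illustration of two of those pieces (every name here is invented for the example; the template model is not shown): reads are answered from a local cache so the application keeps working disconnected, and writes made while offline queue up until a synchronization pass can push them to the server.

```javascript
// Toy offline store: a local cache for reads, plus a queue of pending
// writes to be synchronized when the connection returns.
function OfflineStore() {
  this.cache = {};    // local cache of records by key
  this.pending = [];  // writes made while disconnected
}

OfflineStore.prototype.read = function (key) {
  return this.cache[key];           // served locally, connected or not
};

OfflineStore.prototype.write = function (key, value) {
  this.cache[key] = value;                         // apply locally right away
  this.pending.push({ key: key, value: value });   // remember for later sync
};

OfflineStore.prototype.sync = function (send) {
  // Push queued writes to the server; `send` stands in for whatever
  // synchronization protocol the application actually uses.
  while (this.pending.length > 0) {
    send(this.pending.shift());
  }
};
```

A real synchronization protocol also has to pull server-side changes and resolve conflicts, which is where most of the hard work lives; this sketch only shows the shape of the client side.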

In fact, I’ve written about all these gaps before (see Evolution in Action), but in the context of the current excitement around Ajax, it seems reasonable to describe not only what is different and making it work, but also what is still missing. Obviously, these things are fixable from a technical point of view. This isn’t rocket science. But if only one browser fixes them, it is unlikely to help at this point. We are in a sort of deadly embrace. It is hard to predict how this will play out. History has shown that when innovation is stifled, sooner or later someone who has nothing to lose runs around it and changes the rules of the game completely. I’m confident that this will happen here as well. But I honestly don’t know when.

What I predict will drive this change is the advent of truly mobile computing on mobile devices. This is going to force the game to change. It is way too expensive to build solutions for mobile on J2ME, and the customer experience is often too poor when they are built using WAP (except for super-simple things). I think that we’re going to rethink browsing around a model which has pub/sub, events, and caching built in and which doesn’t have the problems of re-layout. More on this in a subsequent post.
