In response to comments

June 3, 2005

Several people have commented on my most recent post arguing that what is really required is more fundamental than Ajax and needs to be XML based. Despite having been part of XML pretty much since the beginning (late '96), I have a very pragmatic point of view about this. Engineers should use the best tool for the job. For example, sometimes the best way to send data from the server to a running page isn’t XML. It is a Javascript fragment that, when “eval’ed” on the client, turns into a set of js arrays or values which can then be used within the page. This can be faster and easier to program. If so, why not use it? More generally, I actually agree that Ajax is somewhat transitional, as I hinted at the end of my last post. It makes pages richer and more interactive, which is a good thing when appropriate (again, the right tool for the job), but it doesn’t solve the issue of creating content for mobile devices or many other issues. That being said, this idea that XML is the “answer” arouses my skepticism. I think it can be a useful tool, and it can be a religious mistake. For example, in the early days, XPaths weren’t expressed as they are now, i.e. as expressions. Instead they were expressed as big chunks of XML, a sort of XML infix parse tree for the expression. It was awful and we fixed it. Similarly, XML turns out to be a very cumbersome way to encode procedural logic compared to, say, script. What is really useful about XML is that the parsing/tokenization comes for free, that it can represent a very rich set of data models, that it is relatively self-describing, and that, at this point, it is standard. So when the problem calls for a tool which requires sharing data between applications or encoding the state of an application (e.g. the new Office announcements from Microsoft), it seems reasonable. But when describing the procedural logic within an application, or even the expressions, in my experience it usually is not.
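To make that pattern concrete, here is a minimal sketch (the response body, the data, and the function names are all hypothetical; this is the idea, not a hardened implementation):

    // Suppose the server's response body is not XML but a Javascript
    // fragment such as:
    //
    //   [["SEA", "Seattle", 74], ["PDX", "Portland", 78]]
    //
    // eval'ing it yields native arrays the page can use immediately,
    // with no XML parsing or DOM walking.
    function handleResponse(responseText) {
      // Only eval responses from a server you trust: eval runs
      // arbitrary code.
      var rows = eval(responseText);
      for (var i = 0; i < rows.length; i++) {
        var code = rows[i][0], city = rows[i][1], temp = rows[i][2];
        // ... update the page from code/city/temp ...
      }
    }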

One clear limitation of XML is that there isn’t an easy way to update it. If you already have some XML and want to alter it in some way, there is simply no standard right now for doing this, and the DOM code is usually both hideous to write and relatively fragile since there are no guaranteed ID’s on elements or checksums. Another obvious limitation is moving binary data about. One of the XML founders tells a hair-raising story of a company telling him how they plan to move video around by encoding it and including it in the XML itself. Because of all this, I’ve recently been spending a lot more time working with Atom and RSS 2.0. These are XML, but they are more. They have the idea of sets, which means that one can understand how to insert or replace items, and Atom has a protocol for this. It also means that some very low-tech ways can be described to get subsets out of them. They support the idea of LastUpdated and ID so that replacing an item within the document can make sense. And they have explicit and very sensible ways to point to binary data, describing where it is, what type it is, and how big it is, and then leaving it to the client to use normal means to subsequently fetch this data.
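For concreteness, here is roughly what such an entry looks like in Atom (illustrative values only; the element names differ a bit between RSS 2.0 and the Atom drafts):

    <entry xmlns="http://www.w3.org/2005/Atom">
      <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
      <title>Status report</title>
      <updated>2005-06-03T18:30:02Z</updated>
      <!-- Binary data is pointed to, not embedded: the link carries
           the location, the media type, and the size, and the client
           fetches it by ordinary HTTP when it wants it. -->
      <link rel="enclosure" type="video/mpeg" length="12216320"
            href="http://example.org/video/report.mpg"/>
      <content type="text">Everything on schedule.</content>
    </entry>

The id and updated elements are what make replacement sane: a client can match an incoming item against the one it already has and know which is newer.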

Anyway, I don’t mean to argue too strongly with the thoughtful comments on the previous post, but to caution that this is all engineering, not religion, and pragmatism should rule.


Ajax reconsidered

June 1, 2005

I’ve been thinking about why Ajax is taking off these days and creating great excitement when, at the time we originally built it in 1997 (DHTML) and 1998 (the XML over HTTP control), it had almost no uptake. In 1997 I spent a month just chatting with customers about DHTML and what they liked/disliked. In general, they were not fans. They saw the web as a two-edged sword. On the one hand it offered instant and universal access to all their customers, which was an opportunity they couldn’t afford to ignore. On the other hand, they were terrified by the support costs of having millions or tens of millions of customers using their software. Accordingly, they wanted applications (aka web sites) that were as simple to figure out how to use as possible. Unlike productivity applications, which Microsoft at least flatters itself that its customers use every day, these were applications (web sites) which might be used only once or at most once a week (except for the brief insanity of day-trading). There is a trade-off between ease of learning and richness of UI. Toolbar icons and right clicks and drag/drop and so on are often great accelerators, but they aren’t necessarily obvious. Filling in fields and clicking on URL’s usually are. The customers, worrying about support for their customers, were emphatically not in favor of rich internet applications. They wanted reach, not rich. So why has this changed? I think that there are three reasons.

First, the applications that are taking off today with Ajax aren’t customer support applications per se. They are more personal applications, like mail or maps or schedules, which often are used daily. Also, people are a lot more familiar with the web, so slowly extending the idiom for things like expand/collapse is a lot less threatening than it was then. Google Maps, for example, uses panning to move around the map, and people seem to love it.

Secondly, the physics didn’t work in 1997. A lot of Ajax applications have a lot of script (often 10,000 or 20,000 lines) and, without broadband, the download of this can be extremely painful. With broadband and standard tricks for compressing the script, it is a breeze. Even if you could download this much script in 1997, it ran too slowly. Javascript wasn’t fast enough to respond in real time to user actions, let alone to fetch some related data over HTTP. But Moore’s law has come to its rescue, and what was sluggish in 1997 is often lightning quick today.

Finally, in 1997, or even in 1999, there wasn’t a practical way to write these applications to run on all browsers. Today, with work, this is doable. It would be nice if the same code ran identically on Firefox, IE, Opera, and Safari, but in fact, even when it does run, it doesn’t run optimally on all of them, so some custom coding for each one is still required. This isn’t ideal, but it is manageable.
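The canonical example of that per-browser plumbing is obtaining the XML-over-HTTP request object itself, which is created differently on IE than everywhere else. A typical helper looks something like this:

    // Create the "XML over HTTP" request object in a cross-browser way.
    function createRequest() {
      if (window.XMLHttpRequest) {
        // Firefox, Opera, Safari
        return new XMLHttpRequest();
      }
      // IE exposes it as an ActiveX control, under one of several
      // ProgIDs depending on which MSXML version is installed.
      var progIds = ["Msxml2.XMLHTTP", "Microsoft.XMLHTTP"];
      for (var i = 0; i < progIds.length; i++) {
        try {
          return new ActiveXObject(progIds[i]);
        } catch (e) { /* try the next ProgID */ }
      }
      return null; // no support at all
    }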

My son (Alex Bosworth) wrote a popular post a week ago on the pitfalls of Ajax applications, but he left out some of the features still missing from Ajax applications:

First, printing is still hard. The browser has never grown up to enable the page author to easily describe an alternate layout for printing, which is a shame. Why isn’t there an “HTML” for printing which can describe rotation, frozen column or row headers, and so on?

Secondly, the browser isn’t a good listener for external events. If you want to build an application, for example, to show you instantly when someone bids or a price changes, it is hard. You can poll, but poll too frequently and the application starts to feel sluggish, and even then it isn’t easy to do. What you really want is an event-driven model where, in addition to user events like typing, the page can describe events like an XMPP message or a VOIP request or a data-changed notification for an Atom feed.
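Today the best you can do is the polling workaround, which looks roughly like this (the URL and the page-update function are hypothetical):

    // Poll the server for the latest bid every few seconds. Every poll
    // costs a round trip whether or not anything changed, and the
    // interval is a guess that trades staleness against sluggishness.
    var POLL_INTERVAL_MS = 5000;

    function pollForBids() {
      var xhr = createRequest(); // the cross-browser helper above
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          showLatestBid(xhr.responseText); // hypothetical page update
        }
      };
      xhr.open("GET", "/auction/12345/latest-bid", true);
      xhr.send(null);
    }

    setInterval(pollForBids, POLL_INTERVAL_MS);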

Third, if you want the application to run offline, you are essentially out of luck. I’ve written about this at length before in this blog and don’t need to repeat what is required in detail. To summarize what I said earlier: a local cache, a smart template model, and a synchronization protocol are required to build applications that run equally well connected and disconnected, and the way the BlackBerry works is a role model for all of us here.
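In skeletal form, the shape of the thing is something like this (all names are hypothetical, and the hard parts, the template model and conflict resolution, are elided):

    // A local cache the UI renders from, plus a queue of pending
    // changes that a synchronization protocol drains when connected.
    var cache = {};          // id -> record, usable while disconnected
    var pendingChanges = [];

    function update(id, record) {
      cache[id] = record;    // the UI always reads and writes the cache
      pendingChanges.push({ id: id, record: record, when: new Date() });
      if (isConnected()) sync();  // otherwise sync on reconnect
    }

    function sync() {
      // Push queued local changes, then pull and merge server-side
      // changes; deciding who wins on conflict is the hard part.
      while (pendingChanges.length > 0) {
        sendToServer(pendingChanges.shift()); // hypothetical transport
      }
    }

isConnected and sendToServer stand in for whatever connectivity detection and transport the platform offers.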

In fact, I’ve written about all these gaps before (see Evolution in Action), but in the context of the current excitement around Ajax, it seems reasonable to describe not only what is different and making it work, but what is still missing. Obviously, these things are fixable from a technical point of view. This isn’t rocket science. But if only one browser fixes them, it is unlikely to help at this point. We have a sort of deadly embrace. It is hard to predict how this will play out. History has shown that when innovation is stifled, sooner or later someone runs around it who has nothing to lose and changes the rules of the game completely. I’m confident that this will happen here as well. But I honestly don’t know when.

What I predict will drive this change is the advent of truly mobile computing on mobile devices. This is going to force the game to change. It is way too expensive to build solutions for mobile on J2ME, and the customer experience is often too poor when they are built using WAP (except for super simple things). I think that we’re going to rethink browsing around a model which has pub/sub, events, and caching built in and which doesn’t have the re-layout problems. More on this in a subsequent post.


When in Rome

April 2, 2005

Warning to the techies: this is really a family entry. I’ve been in Rome with my family on holiday for the last week. As someone whose children are now doing more interesting things than I am (a natural consequence of being on the edge of turning 50, I think), time with my family is increasingly important. Rome is a surpassingly lovely city. One of the advantages of holidays is time to think about things other than work, and this trip we’ve been thinking primarily about how beautifully made things here are, how astonishing the art is. Everywhere you turn there are beautiful palaces, squares, houses, churches, fountains, and statues. Even the statues on the bridges here are magnificent. My daughter just wrote some poems about this on her blog which I, as a proud father, think are spectacular. Feel free to comment, as she likes comments.

My son and I have actually stopped our normal give and take on RSS and open source (his blog) and hacking and DRM, and have instead been talking about baroque versus classic, Gian Lorenzo Bernini and Francesco Borromini and Michelangelo Buonarroti, and how amazing it is that they could sustain the energy and vision to complete the Basilica di San Pietro over a 120-year period, across generations of artists and builders. It makes the sustained 1-2 year efforts we sometimes put into software seem so insignificant, and the fact that we sometimes don’t put enough effort into making it really beautiful even more criminal. I return with rededicated desire and hope to work on things that last, that matter, and that are as well designed for humans as they can be.


Tensions on the Web

March 23, 2005

I attended two conferences in the last week, eTech and PC Forum. The contrast between the two was somewhat startling. eTech was hard-nosed, edgy, totally clued in, almost dissonant, and really interesting. I learned a great deal about what is cutting edge in the world I’ve been watching: conversations and collaboration on the web. There were also some really exciting announcements, such as Amazon’s OpenSearch model for A9, for which Jeff and Udi are to be commended, and an amazing presentation by George Dyson, at least for a historian like me. PC Forum was much more socially conscious, more ponderous, much older, and more like some sort of British club in which those of us who had the luck to have done something right once (or to have just gotten lucky) now got to sit in our club chairs and try to solve the really hard problems of the world, such as health and education and how the brain works. There were some really interesting presentations, in particular one by Jeff Hawkins, but I didn’t learn as much about the web as I did about world issues in general. Indeed, I learned more about the web in the last 20 minutes from Mary Hodder teaching me about Technorati links than I did in the rest combined. There was/is an infatuation at both conferences with folksonomies (tagging) that I’ll discuss more in a moment.

I haven’t posted for quite a while because my last posts were used to mount unfair attacks on Google, twisting the words I’d used and attributing my opinions to Google. I want to be really clear about something. The opinions I express in this blog are my own. They have nothing to do with Google’s opinions. Google only asks that I not leak information about future products. Period. But despite that, recent blog posts of mine were used to attack Google, and this upset me deeply. Much to my surprise, Dare Obasanjo came up to me and told me, after some fairly vitriolic complaining from me to him about this earlier state of affairs, that he wished I’d continue to post. I thought about it over the weekend and decided that to some degree you have to take your chances in this environment rather than just hide when you don’t like the behavior, and that perhaps I was being oversensitive anyway. There are too many interesting things going on right now.

I’ve been complaining about two things on the web for years. Think of the web as the world’s best communication machine. Then the promise should be that anyone can connect to any information or application or anyone else, and that any application can connect to anyone or any application or any information. We got anyone-to-anyone early in the form of email and more recently in the form of IM and of blogs. IM adds real-time communication and presence, and blogs add broadcasting to the world along with a dialog with the world. We got anyone-to-any-application from the esteemed Tim Berners-Lee in the form of HTML, HTTP, and URL’s, which changed our world. I say applications because there wasn’t any standard way to ask for information. We got, unfortunately, any application talking to anyone (we call this spam). Web services in one form or another are letting applications access other applications although, as I’ve said elsewhere, I think that the standards are too prolix and that a lot of the action will come out of REST and RSS.

But we didn’t get two things. We didn’t get a standard way to get information (e.g. a standard query model for sites). And we didn’t get people working together in communities to create and construct things, with one interesting exception: message boards/groups. Mail was the interface, not the web and not IM. I’ve been whining about this for about 5 years off and on and even started a company once to try to address this.

With OpenSearch, the lack of standard ways to get information is, for the first time, beginning to change. There is now a simple but de facto standard way to start querying sites for information. That’s hugely exciting. The current standard is limited, but a great start. And the web is now rapidly becoming the place for people to collaborate. Wikis are growing like wildfire. Folksonomies (tagging) are causing people, quickly and in an emergent, bottom-up way, to come together to build taxonomies that work for them and that surprisingly rapidly become stable. Flickr, which Yahoo just bought, is a great example of this; Del.icio.us by Joshua Schachter pioneered this model, and Wikipedia has picked it up. I’ve always been hugely suspicious of top-down and restrictive taxonomies (e.g. if you’re a book, you’re not a newspaper) and confident that normal people would never bother to classify things according to someone else’s taxonomy. But I think that tagging has broken through that. It is sufficiently KISS (see my early talks on this for why I think this is good) and rewarding (you get attention if you pick popular tags) to have gained amazing momentum. The clever and audacious Dave Sifry of Technorati claims to have found 5MM tagged posts just in the last two and a half months (from del.icio.us and from Flickr and from various blogs). As long as we don’t let the ontologists take over and tell us why tags are all wrong, need to be classified into domains, and need to be systematized, this is going to work well, albeit sloppily. What it does is open up ways to find things related to anything interesting you’ve found and navigate not a web of links but a web of tags. At the same time, Wikipedia has shown that a model in which content is contributed not just by a few employees, but by self-forming, self-managing communities on the web, can be amazingly detailed, complete, and robust. So now people are looking at ways in which the same emergent, self-forming, self-administering models of tagging and wikis and moderation can be used for events (EVDB) and for music and for video and for medical information. It’s all very exciting. It is a true renaissance. I haven’t seen this much true innovation for quite a while. What I particularly like about all this is how human these innovations are. They are sloppy. To me, tags are sloppy, practical, de facto ontologies. Wikis are sloppy about changes and version editing. It is accepted that we’re trying new things and that sometimes messes will occur. In short, it is unabashedly creative and imprecise. I’ve always believed in the twin values of rationalism and humanism, but humanism has often felt as though it got short shrift in our community. In this world, it’s all about people and belonging and working with others.

In this very triumph comes the tension and the problems. Every one of these groups has to worry about spam. Wikipedia does occasionally get spammed, but its entries are long-lasting and resilient and usually get fixed. For information that is more time-critical and evanescent, however, this sort of vandalism can be much more harmful. While the commercial part of this is detestable, it is at least comprehensible. Often, though, there are pointless attacks. Much as vicious people constantly invent viruses to destroy the existing web (amusingly now called the old web) and somehow tell themselves that this vandalism is acceptable, so they destroy the tenor of message boards, vandalize wikis, and screw up the tags just because they can, and generally try to attack, smear, and destroy. The blogger world calls these people trolls. Message boards end up needing moderation techniques because of these people. Bloggers learn to turn off comments. And taggers will end up having to use reputation and other techniques to protect something hugely useful but potentially fragile, or to create gated communities like the old fortresses in history built to keep the vandals out. A great deal of the discussion at eTech and even at PC Forum was about how to keep the vandals from doing too much damage. Sadly, one reaction may be to curtail anonymity because it is so abused, and with the loss of anonymity comes the loss of privacy.

Indeed, the other concern is privacy. Presence is in the air. The web, because of mobile and broadband and IM, is becoming real-time. Real-time presence changes everything and rapidly leads to thinking about much richer ways of communicating within communities. It highlights some of the, in my opinion, few limitations of the browser as a zero-deployment user interface model. But it also risks our losing those last moments of privacy. Lufthansa has announced that it will support internet access on planes. I will not fly on them. I need some periods in my life where I am unreachable. Indeed, every year in August I vanish for a month from the web, turn off email, deal with the withdrawal, and suddenly I relearn how to think and concentrate. In a world where knowledge and thinking are everything, it is ironic that increasing availability has led to decreasing time in which to reflect, ponder, and just let the mind wander, and yet these periods tend to be essential to truly thinking hard. If Nokia sold a phone that reported where I was at all times through presence (as some phone vendors actually already do), I wouldn’t buy it. We’re going to have to work out how to support all this in a manner in which customers can effortlessly and intuitively opt in and out so that, when they want, they can be left alone, vanish from view, and control who can see them when.

It is going to be fascinating and exciting to watch how these tensions play out: the rising trend of people working together, collaborating, and communicating over the web in increasingly real-time ways, contending with the human need for privacy and reflection and with the unfortunate tendency of some humans to vandalize rather than construct.


We all stand on the shoulders of giants

January 1, 2005

Recently I pointed out that databases aren’t evolving ideally for the needs of modern services and customers, which must support change and massive scale without downtime. This post was savaged by an odd alliance: the shrill invective of the Microsoft apparatchiks, perhaps sensing an opportunity to take the focus away from Ballmer’s remorseless attack on all that is not Microsoft (but most especially on Open Source), and certain Open Source denizens themselves who see fit to attack Google for not “giving back” enough, apparently unaware that all software benefits in almost infinite measure from that which comes before. As usual, the extremes find common ground in a position that ignores common sense, reason, and civility.

Many years ago, Eric Michelman and Brad Silverberg and Ken Ong and I built a product, Reflex, before Windows, but with a GUI front end for a PC. In order to build it, we had to build fonts and event managers and heap managers and graphical routines and so on. Later, Windows and the Mac came along and made all this unnecessary, but then we still had to build huge amounts of code in Access for database and indexing, and much more code for rendering within Windows: brushes and XOR and line drawing and manual bit-blitting. Still later we built a browser, and we helped to build shared relational databases. As these became a central theme in most applications, we realized that it was now possible for anyone to build Reflex with infinitely less work, because the database work was done, as was the rendering (the browser), and so one could focus purely on the other issues. This is the nature of science, of learning, of education, of engineering, and of software. We all benefit from those who came before us. We benefit most when the knowledge is free and generally accessible, but we benefit either way. It would seem that these cacophonous critics, yammering about giving back and sweepingly ignoring the hundreds of billions of times people use and appreciate what Google gives them for free every day, from Search to Scholar to Blogger to Gmail to Picasa, do not understand this basic fact.

Suggesting new lines of learning and research is no sin. It is how we grow and add value and has been throughout human history. Taking advantage of what has already been learned and taught is equally no sin. It is common sense and to do otherwise is usually a sign of hubris, arrogance, and immaturity.

Giving back is always done through what one is good at, be it making accessible the world’s literature and learning and knowledge online along with tools to search it, create it, and communicate about it, or through making the world’s goods available if that is one’s business. This is how we are all rewarded for casting our bread upon the waters. It is how economies grow and culture flourishes. And the fact that the critics of the earlier post seem to understand none of this suggests a worldview so narrow-minded as to make one gasp in wonder and horror.


Open Source

December 30, 2004

Michael Rys pointed me to an interesting counterpost to my most recent post which, I think, somewhat unfairly takes me to task for asking for the open source tooth fairy.

It says that Google essentially has a parasitic relationship with the open source community. As it turns out, Google actually puts quite a lot of time and effort (and occasionally money) into supporting open source efforts.

It suggests that my post speaks for Google in what I ask for. Actually, it doesn’t. Google is doing just fine with respect to storage and indexing. It has built what it needs to support its products. I was really speaking far more from what I have heard from many, many large corporate customers and from almost all services.

Lastly, the counterpost assumes that saying software is free is the same as saying that you can’t make money from it. In fact, customers are always willing to pay for support and service. Essentially, long ago, Microsoft turned itself largely into an annuity business where the licenses companies hold with Microsoft are just that: payments for guaranteed support and ongoing upgrades. Many open source vendors are now doing the same. My point was not that people should never get paid, but rather that it makes more sense to pay for the ongoing support.

Nevertheless, it is a good and entertaining post and I recommend reading it for a different point of view.


Where have all the good databases gone

December 29, 2004

About five years ago I started to notice an odd thing. The products that the database vendors were building had less and less to do with what the customers wanted. This is not just an artifact of talking to enterprise customers while at BEA. Google itself (and I’d bet a lot Yahoo too) has needs similar to the ones Federal Express or Morgan Stanley or Ford or others described, quite eloquently, to me. So, what is this growing disconnect?

It is this. Users of databases tend to ask for three very simple things:

1) Dynamic schema so that as the business model/description of goods or services changes and evolves, this evolution can be handled seamlessly in a system running 24 by 7, 365 days a year. This means that Amazon can track new things about new goods without changing the running system. It means that Federal Express can add Federal Express Ground seamlessly to their running tracking system and so on. In short, the database should handle unlimited change.

2) Dynamic partitioning of data across large, dynamic numbers of machines. A lot of people track a lot of data these days. It is common to talk to customers tracking 100,000,000 items a day who have to maintain the information online for at least 180 days at 4K or more a pop, and that adds (or multiplies) up to 100 TB or so. Customers tell me that this is best served up to the 1MM users who may want it at any time by partitioning the data because, in general, most of this data is highly partitionable by customer or product or something. The only issue is that it needs to be dynamic, so that as items are added or get “busy” the system dynamically load-balances their data across the machines (see the sketch after this list). In short, the database should handle unlimited scale with very low latency. It can do this because the vast majority of queries will be local to a product or a customer or something over which you can partition. It is, obviously, going to come at a cost for complex joins and predicates across entire data sets, but as it turns out, that isn’t very common for these sorts of databases and can be slower as long as point 3 below is handled well. And a lot of those queries can be solved with some giant indices that cover the datasets that are routinely scanned across customers or products.

3) Modern indexing. Google has spoiled the world. Everyone has learned that just typing in a few words should show the relevant results in a couple of hundred milliseconds. Everyone (whether an Amazon user, or a customer looking up a check they wrote a month ago, or a customer service rep looking up the history for someone calling in to complain) expects this. This indexing, of course, often has to include indexing through the “blobs” stored in the items, such as PDF’s and spreadsheets and PowerPoint files. This is actually hard to do across all data, but much of the need is within a partitioned data set (e.g. I want to, and should, see only my checks, not yours, and only my airbill status, not yours) and then it should be trivial.
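To illustrate point 2 above (this is a toy sketch, not how any shipping product does it): route each customer’s data to one of N machines by hashing the partition key, so that queries scoped to a single customer touch exactly one machine.

    // Toy partitioning sketch. Real systems must also rebalance
    // dynamically as machines are added or a customer gets "busy",
    // which is exactly the part the vendors don't give you.
    function hashCode(key) {
      var h = 0;
      for (var i = 0; i < key.length; i++) {
        h = (h * 31 + key.charCodeAt(i)) | 0;
      }
      return Math.abs(h);
    }

    function machineFor(customerId, machines) {
      return machines[hashCode(customerId) % machines.length];
    }

    // machineFor("FEDEX-12345", machines) -> the one machine holding
    // that customer's rows; a cross-customer join touches them all.

Note that simple modulo hashing reshuffles almost everything when the machine count changes, which is why the dynamic version is genuinely hard.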

By the way, the inherent cost of the machines to do all this is relatively negligible. Assume 3 × 400 GB cheap disks per machine, mounted in racks of 60, and one rack would pretty much do it if there weren’t a need for redundancy and logs; say two racks to cover that. Companies are already coming out this year with highly redundant disk arrays for $1 per GB, or $1200 per machine for the ones above (not counting the $1000 for the machine and memory itself). In short, 120 such machines will probably cost less than $500K, and that’s less than 3-4 good programmers, and it is a one-time capital cost. But the cost most people I’ve spoken to describe in terms of the actual people needed to build and administer such systems is an order of magnitude more. For that matter, configure the 120 machines with 4GB of memory each and you could normally keep the current day’s work in memory, and in many of these cases the data accessed will be the current day’s, as people look for their waybills or flight statuses or check their blog comments or whatever.
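The back-of-the-envelope arithmetic, spelled out (rounded, using the figures above):

    var itemsPerDay  = 100e6;                 // 100,000,000 items a day
    var bytesPerItem = 4096;                  // "4K or more a pop"
    var daysRetained = 180;
    var totalBytes   = itemsPerDay * bytesPerItem * daysRetained;
    // ~74 TB raw, i.e. "100 TB or so" once you add overhead.

    var diskPerMachine = 3 * 400e9;           // 1.2 TB per machine
    // totalBytes / diskPerMachine is ~61 machines: one rack of 60,
    // doubled to ~120 (two racks) for redundancy and logs.

    var costPerMachine = 1200 + 1000;         // $1/GB of disk + the box
    var totalCost = 120 * costPerMachine;     // ~$264K, well under $500K

    // And the hot set fits in memory: one day's data is ~400 GB,
    // versus 120 machines * 4 GB = 480 GB of aggregate RAM.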

Users of databases don’t believe that they are getting any of these three. Salesforce, for example, has a lot of clever technology just to hack around the dynamic schema problem so that 13,000 customers can have 13,000 different views of what a prospect is.

If the database vendors ARE solving these problems, then they aren’t doing a good job of telling the rest of us. The customers I talk to who are using the traditional databases are essentially using them as very dumb row stores and trying very hard to move all the logic and searching out into arrays of machines with in-memory caches. Oracle is doing some very clever high-end things with streaming queries and the ability to see data as of some point in recent history (and even which updates affected the query within some date range) and with integrated pub/sub and queueing, but even Oracle seems to make systems too static and too ponderous to really meet the needs above and, oh yes, they seem to charge about ten times as much as one would expect for them.

Indeed, in these days of open source, I wonder if the software itself should cost anything at all. Open Source solutions would undoubtedly get hacked more quickly into something robust and truly scalable, built out of nice, simple software. It wouldn’t be as pointwise fast, but the whole point is that these systems would scale linearly and are so cheap that it doesn’t matter. The advantage of Open Source is that those folks really understand how to build scalable clouds of machines with a default assumption of failure and load balancing. It’s called Apache. There are some other interesting problems that the database vendors are also ignoring (like how do I ask for the set of complaints that are like the ones this customer has), but for now the three above seem like the big ones to me. My message is to the Open Source community that has, so ably, built LAMP (Linux, Apache and Tomcat and MySQL and PHP and Perl and Python). Please finish the job. Do for databases what you did for web servers. Give us dynamism and robustness. Give us systems that scale linearly, are flexible and dynamically reconfigurable and load-balanced, and are easy to use.

Light that LAMP for us please.


Christmas Interlude

December 25, 2004

I grew up with a Jewish father who loudly condemned religion and a Catholic mother who, in her quiet way, having suffered through a French Catholic boarding school run by nuns, disliked it even more. We celebrated Christmas with great enthusiasm every year, cutting down our own tree (in Vermont), dragging it home, decorating it with homemade decorations, and generally having a wonderful time. What we were celebrating wasn’t especially commercial. We were celebrating family, friendship, peace, and in general man’s ability to find joy and harmony in a difficult world. We viewed the holiday as inclusive, friendly, and festive. We visited all our neighbors on Christmas day (this was a small road in the hills of Vermont called Wheelerville Road) and generally had a wonderful time, and the memories have reverberated down through the years. My children, born of a marriage between an agnostic dad and a skeptical Jewish mother, have shared the same tradition. I have no doubt that their children will too. I believe that it is this spirit that made this country a great one: a spirit of hope, of inclusiveness, of the importance of neighbors, friends, family, and fun regardless of beliefs, ethnicity, or anything else.

I bring this up because these have been hard times for these beliefs. We are faced on the one hand with people who, in the name of conservatism, try their very best to equate morality with religion, equate religion with the right to kill, and then act on these beliefs to destroy people’s lives, their freedom to choose, and their trust in each other. I do not speak only of John Ashcroft and the Christian Right. I speak equally of the Islamic fanatics who seem to stand against everything I believe in and hold dear, namely the triumph of rationalism and humanism. Both seem to me to be sides of the same coin: quick to kill, quick to detest and fear and dislike those who would think for themselves, and quick to hate those who make their own moral choices and in a hard-fought way find their own paths to a moral high ground. Both sides contend that one cannot be moral if one is not religious, and both sides claim to be inspired by faith (which is of course unarguable). This is an abdication of man’s personal responsibility to figure out what is right and wrong, to behave with integrity and honor and kindness and justness in equal measure. It is the very antithesis of both humanism and rationalism.

For years I have been a conservative because I have believed that the only role of government should be to protect the rights of people, not control their outcomes, leaving to ability and chance the right of people to grow and succeed or fail, and the left in the US seemed to have forgotten that. But this year I had to switch. It was simply too appalling to stand with those who would destroy people’s freedom and destroy people’s lives in the name of their “faith”.

So let me suggest that Christmas be rededicated to a belief and faith in the human spirit, to a belief and a faith in the ability of humans of all beliefs and types to treat each other with dignity and respect and in the need to counter the inherent evil of treating people poorly simply because they do not share your irrational beliefs.


Economics and Sand Castles

December 12, 2004

I have been reading an indignant post about my talk written by Jean-Jacques Dubray. He says, and I quote,

“Adam, have you ever considered that HTML and Javascript have almost wiped out software engineering from the face of this earth? Was it desirable to build (web) applications with such sloppy technologies which complexity is adapted to other classes of problems but which adoption has guaranteed hefty revenues to companies like BEA or Microsoft? IT doesn’t matter today, not because like machine tools or electricity everyone can acquire them, IT doesn’t matter today because sloppy technologies prevent companies to build mission critical systems at a reasonable cost and reasonable risk and most customers had to result to adopt SAP or PeopleSoft view of the world in attempt to diminish this cost and risk.”

In the face of such hyperbole, it is always debatable whether to respond or not. In some sense a response legitimizes what it should not. I was going to write a long article about real economics, about supply and demand, and about how in a free market, like it or not, people get to choose what they want and what works for them even if IT doesn’t like it (as IT didn’t like spreadsheets). But I don’t need to. Paul Graham’s Hackers and Painters says it all, far better than I could ever hope to. The eponymous chapter alone, Hackers and Painters, is worth the read, but Paul’s intense desire to build a programming language for humans is what makes the book for me (even though I don’t really agree with his solution, being a sort of PHP fan myself). I read a fair amount (son of a librarian, it sticks), and of the books I’ve read this year, this one will stick with me long after many others have faded. The book resonates with all the reasonableness and pragmatism and deep understanding of the human condition and of economics that is absent in the quoted paragraph above.

Mr. Dubray, if you read nothing else, read Paul’s chapter on the other road ahead. It says what I said in my post in this blog on Evolution in Action, but Paul says it so much more eloquently and completely. It should help you to understand that the problem IT faces isn’t sloppy languages. It is irrelevance. For much (although certainly not all) of the work IT does, IT is like children building sand castles on the beach and watching the tide roll in. That tide is highly customizable web-based solutions, Salesforce.com today, perhaps Talaris tomorrow. Ask the average Salesforce.com customer (meaning a sales rep) if he is happier with the solution he has now or the one he had back when IT was building a custom CRM for him. I think the answer will surprise you. Web services have helped immensely here because they have made possible the integration of these solutions with internal logic for those things IT should still be working on. This is the promise and the future, in my opinion.


Well!

November 22, 2004

That speech certainly stirred things up. Jeez, I should speak more often. I learn a lot from the indignant responses. And speaking of indignant responses, not only are there Danny Ayers’ excellent rebuttals, there is a classic from Marc Canter. I love this response. It is thoughtful but passionate, indignant, and totally interesting. And yet, I don’t agree with it. If his article is to be believed, only through the aegis of RDF can I understand the “micro content” like who authored a talk, what it was about, and what was covered. Now it is undoubtedly true that if I built a ton of RDF, then, through the right assertions, it could say who authored the content, what it was about, and so on. Of course, I could also just invent some namespace and add some attributes to do this. (I can see Marc getting red in the face just thinking about the ignorance and stupidity of this remark.) But seriously, I know how to add attributes and elements. It is easy. Even I can do it. But I always get confused when I try to even remember the RDF syntax for somehow asserting who the author is. And, apparently, so do others. I’m OK with the looser, less precise intelligence of Google searching the text to answer these questions. And no, Marc, it isn’t because “Google is known as an anti-meta-data sort of place”. I’ve only been there 5 months, for goodness’ sake. It is because it works pretty well. However, I’m still learning, and this argument I think benefits everyone, even if I turn out to be wrong, because it gets people thinking.
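To show the contrast (both fragments are purely illustrative, and the namespace in the second is made up for the example): asserting authorship in RDF/XML with the Dublin Core vocabulary looks like this,

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.com/talks/42">
        <dc:creator>Adam Bosworth</dc:creator>
        <dc:subject>web services</dc:subject>
      </rdf:Description>
    </rdf:RDF>

whereas the invent-a-namespace version is just attributes:

    <talk xmlns:m="http://example.com/2004/meta"
          m:author="Adam Bosworth" m:subject="web services">
      ...
    </talk>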

Several people have complained that I was unfair to CSS. I didn’t mean to say that people should never use CSS. I use it in this Blog. It is a good thing. I like CSS. What I did mean is that when just trying to create a table with 2 cells on the left and one on the right, I don’t want to figure out the CSS for that. And, asking around, neither do a lot of other people. Mostly, I was just being amused that people try to be pixel precise in HTML when that wasn’t the original intent.

Still trying to be a 21st century kind of person.

