Be very afraid

April 3, 2004

Everyone should read this article and ponder whether this is the world they want. I’m sorry to have been off so long. More shortly, but in the meantime, read this and, if you understand at all why the browser has been such a tremendous force for good in terms of access, support, TCO, productivity, and even quality of design, be very afraid.

I’m getting mail/comments that interpret this as me being against Rich UI or Avalon. Nothing could be further from the truth. I think that, in its place, particularly for personal productivity tools, Rich UI is great. What is more, I’m delighted to see Windows building a better model for authoring it. What I’m afraid of isn’t rich UI. I’m afraid of the browser being taken away. Many, many customers love the browser precisely because sites tend to be easy to use. It always amazes me how little the people in our industry understand this. It is easy. Using a well-designed web site is as easy as using a radio, Jobs’s old vision. No GUI application is that easy. Period. They are richer. They are more interactive. They are great for productivity. But they aren’t as easy, or so customers tell me. And it would be a huge shame if we lost that.


REST

December 9, 2003

Obviously I’ve opened a can of worms and I’m getting a ton of helpful advice, pointers on reading, and discussions about whom to talk to. The thing is, I have a day job, and it is going to take me a couple of weeks to read all this, talk to the right folks, and educate myself. So I’m going to go offline, do that, and then come back and comment on what I’ve learned. As a believer in helping programmers, I think some of the comments about them not needing tools to understand the datagrams going back and forth are hasty, but at the same time I think what is meant is that it is harmful for programmers not to EXPLICITLY understand what the wire format is. I completely agree with this. SOA relies on people understanding that what matters is the wire format they receive/return, not their code. Automatic proxies, in this sense, are dangerous, and honestly, if someone can show me that REST could eliminate the hideous complexity of WSDL and the horror that is SOAP Section 5, that alone might make me a convert. Interestingly, if you do believe that programmers should understand the wire format because it is the contract, then one could argue that ASP.NET’s use of Forms and the enormous amount of HTML it hides from programmers is a big mistake. But I’m already deep into one digression, so I’ll leave that thought for others. In either case, there is a basic point here that really resonates with me: if there is always a URL to a resource, then resources can much more easily be aggregated, reused, assembled, displayed, and managed by others because they are so easy to share.
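To show what I mean by programmers explicitly owning the wire format, here is a minimal sketch: the code reads the actual XML on the wire rather than a generated proxy object. The resource URL and element names are hypothetical.

```javascript
// A minimal sketch of "the wire format is the contract": the programmer
// reads the actual XML on the wire instead of letting a generated proxy
// hide it. The resource URL and element names are hypothetical.
async function getExpenseReport(url) {
  const response = await fetch(url, { headers: { Accept: "application/xml" } });
  const text = await response.text();
  const xml = new DOMParser().parseFromString(text, "application/xml");
  // The contract is the XML itself, so the code names its elements explicitly.
  return {
    id: xml.querySelector("report > id")?.textContent,
    total: Number(xml.querySelector("report > total")?.textContent),
  };
}
```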


Learning to REST

December 7, 2003

Answering comments is turning out to be more interesting than trying to explain my ideas, which is, I suppose, part of the idea behind blogs. In any case, I’ve had a slew of comments about REST. I admit to not being a REST expert, and the people commenting, Mike Dierkin and Mark Baker in particular, certainly are. They tell me I’m all wet about the REST issues. I’ll dig into it. But I admit, I don’t get it. I need to be educated. I’m talking about this to our guy Mark Nottingham, who does understand it all, but I will also be at XML 2003 in Philadelphia on Wednesday if people want to explain to me the error of my ways.

Let me explain why I’m confused. I don’t want to depend upon HTTP because I want to be able to use things like IM as a transport, particularly Jabber. So I want some protocol-neutral way to move information back and forth. Maybe I’m wrong here, but I’m particularly wary of HTTP because in many cases I want the “sender” of the message to know reliably that the receiver has it. How does REST handle this? I’m going to try to find out.

Secondly, whenever I send a set of messages to a set of services and responses start coming back, I want a standard protocol-level way to correlate the responses with the sent messages. How do I do this without depending on the HTTP cookie and without the poor programmer having to worry about the specific headers and protocols? Again, I need to learn.

Third, I DO believe in being able to describe a site so that programming tools can make it easy/transparent to read/write the requests and responses without the programmer thinking about the plumbing and formats. How is this done in the REST world? What’s the equivalent of WSDL? Sure, WSDL is horribly complex and ugly. But what is the alternative here? I don’t mind, to tell the truth, if the description is URL-centric rather than operation-centric, but I do want to know that the tooling “knows” how to build the appropriate Request. I care about programming models.

Fourth, as I’ve said before, I think that the model for the mobilized browser communicating with the Internet is frequently going to be a fairly subtle query/context-based synchronization model. For example, given the context of a specific trip and some cached details, update me with the latest details. Given the context of a specific project and some cached tasks and team members, update me with the latest tasks, team members, etc. for that project. How do I do this in the REST model, given that this isn’t trying to change the data on the server and that I need to pass in the context, the implicit query, and the data I already know?

Fifth, how do I “subscribe” to “Events” and then get “unsolicited messages” when someone publishes something I want to know about? KnowNow has been doing this for years (hi, Rohit), but how do I do this in REST? So, REST folks, feel free to send me mail explaining all this to me. I’m trying to learn.
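To make the second question concrete, here is a minimal sketch of the kind of protocol-level correlation I mean: every outbound message carries an explicit correlation ID that replies echo back. The envelope shape and the transport hook are hypothetical; the point is that the same envelope could ride over HTTP, Jabber, or anything else.

```javascript
// A minimal sketch of protocol-neutral request/response correlation.
// Each message carries an explicit correlationId so replies can be
// matched without leaning on HTTP cookies or transport details.
// The envelope shape and transportSend() hook are hypothetical.
const pending = new Map(); // correlationId -> resolve callback

function sendMessage(body, transportSend) {
  const correlationId = crypto.randomUUID();
  return new Promise(resolve => {
    pending.set(correlationId, resolve);
    transportSend({ correlationId, body }); // HTTP, Jabber, whatever
  });
}

function onMessageReceived(envelope) {
  const resolve = pending.get(envelope.correlationId);
  if (resolve) {
    pending.delete(envelope.correlationId);
    resolve(envelope.body); // matched to its request, whatever the transport
  }
}
```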


Modifying Information Offline

November 15, 2003

Changing Data Offline: james@doit.org writes that I should refrain from blogging because my blog “is a real slow” one. Perhaps this is true, but I shall persevere. In this entry, I’m going to discuss how I imagine a mobilized or web services browser handles changes and service requests when it isn’t connected. This is really where the pedal hits the metal. If you just read data and never ever alter it or invoke related services (such as approving an expense report or booking a restaurant), then perhaps you might not need a new browser. Perhaps just caching pages offline would be sufficient if one added some metadata about what to cache. Jean Paoli has pointed out to me that this would be even more likely if, rather than authoring your site using HTML, you authored it as XML “pages” laid out by the included XSLT stylesheets used to render it, because then you could even use the browser to sort/filter the information offline. A very long time ago, when I was still at Microsoft (1997), we built such a demo using XSLT and tricky use of Javascript to let the user do local client-side sorting and filtering. But if you start actually trying to update trip reports, approve requests, reserve rooms, buy stocks, and so on, then you have Forms of some sort running offline at least some of the time, code has to handle the inputs to those “Forms,” and you have to think through how they are handled.
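That 1997 trick still sketches easily today: keep the data as XML, let an XSLT stylesheet do the layout, and re-run the transform locally to sort or filter with no server round trip. A minimal browser-side sketch, where the file names and the sort parameter are hypothetical:

```javascript
// Client-side sort/filter over cached XML using the browser's
// XSLTProcessor. The data stays XML; the stylesheet does the layout,
// so re-sorting offline is just re-running the transform.
async function renderSorted(xmlUrl, xslUrl, sortKey) {
  const parser = new DOMParser();
  const [xmlText, xslText] = await Promise.all(
    [xmlUrl, xslUrl].map(u => fetch(u).then(r => r.text()))
  );
  const xml = parser.parseFromString(xmlText, "application/xml");
  const xsl = parser.parseFromString(xslText, "application/xml");

  const proc = new XSLTProcessor();
  proc.importStylesheet(xsl);
  // The stylesheet is assumed to read this parameter in its <xsl:sort>.
  proc.setParameter(null, "sortKey", sortKey);

  const fragment = proc.transformToFragment(xml, document);
  document.getElementById("report").replaceChildren(fragment);
}
```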

XAML: First, a digression. I promised I’d dig into this a bit more. At the end of the day, I think that treating XAML as an industry standard for UI is premature and assumes that Windows will have complete dominance. It is essentially an extremely rich XML grammar for describing UI and user interactions. It subsumes, declaratively, the types of things VB can do, the flow patterns in HTML or Word, and the 2-D and time-based layout one sees in PowerPoint or, these days, in Central and Royale from Macromedia. In short, it is a universal UI canvas, described in XML, targeting Avalon, Longhorn’s new graphics engine. That is the key point. It isn’t an industry standard unless you assume that Avalon’s graphics are pervasive, which I think is a stretch. Also, people are talking about this as though it will be here next month. As far as I can determine, Microsoft’s next massive OS effort, Longhorn, will ship somewhere between 2005 and 2006. In short, it is probably three years away. Three years from now my daughter will be grown up and in college, and who knows what the world will look like. I have no doubt that Microsoft is salivating at the thought that this will subsume HTML (not to mention Flash and PDF) and thus put those pesky W3C folks out of business, but I can’t really worry about it. Kevin Lynch of Macromedia should be the pundit on this one. End of digression.

Browser Model so far: As discussed already, this new browser I’m imagining doesn’t navigate across pages found on the server addressed by URLs. It navigates across cached data retrieved from web services. It separates the presentation – which consists of an XML document made up of a set of XHTML templates, metadata, and signed script – from the content, which is XML. You subscribe to a URL which points to the presentation. This causes the XML presentation document to be brought down, the UI to be rendered, and the process of requesting data from the web services to start. As this data is fetched, it will be cached on the client. This fetching of the data will normally run in the background, just as mail and calendar on the Blackberry fetch the latest changes to my mail and calendar in the background. The data the user initially sees will be the cached data. Other, more recent or complete information, as it comes in from the Internet, will dynamically “refresh” the running page or, if the page is no longer visible, will refresh the cache. I’m deliberately waving my hands a bit about how the client decides what data to fetch when. I’m giving a keynote talk about this at XML 2003 and I want to save some of my thunder. So far, though, I’ve described a read-only model, great for being able to access information warehouses and personal data like clinical trial histories or training materials, or to find good restaurants in the neighborhood, or to do project reviews, all while offline, but not as good when used for actually updating the clinical trials, entering notes into them, building plans for a team, commenting on the training materials, or booking the restaurants.
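A minimal sketch of that subscribe/render/refresh loop, with the presentation document shape, service URLs, and cache all purely hypothetical:

```javascript
// Sketch of the subscribe/fetch/cache loop: render from cache first,
// then refresh in the background as fresher data arrives.
const cache = new Map(); // serviceUrl -> last XML payload

async function subscribe(presentationUrl, onRefresh) {
  // 1. Bring down the presentation document (templates + metadata + script).
  const text = await fetch(presentationUrl).then(r => r.text());
  const doc = new DOMParser().parseFromString(text, "application/xml");

  // 2. Render immediately from whatever is already cached.
  const services = [...doc.querySelectorAll("service")].map(s => s.getAttribute("href"));
  services.forEach(url => onRefresh(url, cache.get(url)));

  // 3. In the background, fetch fresher data and refresh the page (or just the cache).
  for (const url of services) {
    fetch(url)
      .then(r => r.text())
      .then(xml => { cache.set(url, xml); onRefresh(url, xml); })
      .catch(() => { /* offline: keep showing cached data */ });
  }
}
```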

It’s a fake: It is very important to remember in this model that “reality” usually isn’t on the device, be it a PC, a Blackberry, a Good, or a Nokia 6800. Because the information on the device is incomplete and may have been partially thrown out (it is a cache), you don’t really know which tasks are in a project, which patients are in a trial, or which materials have been written for a section. You only know which ones you have cached. The world may have changed since then. Your client-side data (when running offline) may be incomplete. So, if you modify data, you need to remember that you are modifying data that is potentially out of date.
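It helps to make that staleness explicit in the data itself. A hedged sketch, with hypothetical field names, of stamping cached records so any later change knows what the client believed at the time:

```javascript
// One way to keep the "it's a fake" fact visible in the client: stamp
// every cached record with where it came from, a server-supplied version,
// and when it was fetched. Field names are hypothetical.
function cacheRecord(serviceUrl, record, serverVersion) {
  return {
    ...record,
    sourceUrl: serviceUrl,    // where reality actually lives
    version: serverVersion,   // what the server said when we fetched
    fetchedAt: Date.now(),    // how stale this copy might be
  };
}
```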

Don’t change it. Request the change: Accordingly, I recommend a model in which, in general, data isn’t directly modified. Instead, requests to modify it (or requests for a service) are created. For example, if you want to book a restaurant, create a booking request. If you want to remove a patient from a clinical trial, create a request to do so. If you want to approve an expense report, create a request to approve it. Then relate these requests to the item that they would modify (or create) and show, in some iconographical manner, one of four statuses (sketched in code after this list):
1) A request has been made to alter the data, but it hasn’t even been sent to the Internet.
2) A request has been sent to the Internet, but no reply has come back yet.
3) The request has been approved.
4) The request has been denied.
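Those four statuses are really a little state machine. A minimal sketch, with hypothetical names:

```javascript
// The four request statuses as a lifecycle; the UI picks an icon from
// the current state. Names and the event vocabulary are hypothetical.
const Status = Object.freeze({
  PENDING_LOCAL: "pending-local", // 1) made, not yet sent to the Internet
  SENT: "sent",                   // 2) sent, no reply yet
  APPROVED: "approved",           // 3) approved
  DENIED: "denied",               // 4) denied
});

function advance(request, event) {
  const transitions = {
    "pending-local": { connected: Status.SENT },
    "sent": { approved: Status.APPROVED, denied: Status.DENIED },
  };
  const next = transitions[request.status]?.[event];
  if (next) request.status = next;
  return request;
}
```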

Expense Reports: Let me start with a simple example. While offline, the user sees a list of expense reports to approve. On the plane, he/she digs into them, checks out details, and then marks some for approval and adds a query to others. All these changes show up, but with an iconic reminder next to the status/query fields that these fields reflect changes not yet sent to the Internet. The user interface doesn’t stall or block because the Internet isn’t available. It just queues up the requests to go out so that the user can continue working. The user lands, and immediately the wireless LAN or GPRS starts talking to the Internet. By the time the user is at the rental bus, the requests for approval or further detail have been sent, and the icons have changed to reflect that the requests have now been sent to the Internet. Some new data has come in with more expense reports to be approved and some explanations. By the time the user gets to his/her hotel, these requests on the Internet have been de-queued and processed, invoking the appropriate back-end web services, and responses have been queued up. By the time the user connects at the hotel or goes down to the Starbucks for coffee and reconnects there (or, if the device is using GPRS, much sooner), the responses have come in. If the requests are approved, then the icon just goes away, since the changed data is now approved. If the requests are denied, then some intelligence will be required on the client, but in the simplest case the icon shows a denied change request with something like a big red X (this is what the Blackberry does if it can’t send mail for some reason, as I learned to my sorrow on Sunday). The user sees this and then looks at the rejection to see why.
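The queueing behavior in that story is simple to sketch. Assuming a hypothetical request endpoint, the outbox just drains whenever a connection appears:

```javascript
// Requests pile up locally while offline and flush as soon as a
// connection appears. The "/requests" endpoint is hypothetical.
const outbox = [];

function submit(request) {
  request.status = "pending-local";
  outbox.push(request);
  drain(); // harmless if offline; it will simply fail and retry later
}

async function drain() {
  while (outbox.length > 0 && navigator.onLine) {
    const request = outbox[0];
    try {
      await fetch("/requests", { method: "POST", body: JSON.stringify(request) });
      request.status = "sent"; // icon changes: sent, awaiting approval/denial
      outbox.shift();
    } catch {
      break; // connection dropped mid-drain; keep the rest queued
    }
  }
}

// When the radio comes back (landing, hotel, Starbucks), flush the queue.
window.addEventListener("online", drain);
```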

Notice that all this requires some intelligence on the part of the web services browser and potentially some intelligence on receipt of the approvals or denials from the Internet. In the model I’m imagining, the client-side intelligence will be done in script that will be associated either with the user actions (pressing submit after approving or querying) or with the Internet actions (returning approval or rejection). The script will have access to the content and can modify it. For example, on receipt of a rejection, it might roll back the values to their prior ones. Server-side intelligence will be handled using your web service server of choice.
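A hedged sketch of those two script hooks, reusing the hypothetical submit() from the outbox sketch above; the handler names and the rollback field are my own inventions:

```javascript
// One hook attached to the user action, one to the Internet response.
function onSubmitApproval(report) {
  report.priorStatus = report.status; // remember the value we are changing
  report.status = "approved";
  submit({ targetId: report.id, changes: { status: "approved" } });
}

function onResponse(report, response) {
  if (response.denied) {
    report.status = report.priorStatus;    // roll back to the prior value
    report.denialReason = response.reason; // let the user see why
  }
  delete report.priorStatus;
}
```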

Restaurant Reviews and booking: Let’s take a slightly more complicated example. I’m flying into Santa Fe and don’t know the town. Before I leave NYC for Santa Fe, I point at the mobilized URL for my favorite restaurant review site and check off a price range and cuisine type (and/or stars) that I care about. By the time I get on the plane and disconnect from wireless or GPRS, the site has fetched all the matching restaurants and their reviews onto my PC or PDA. On the plane, I browse through this, pick a restaurant, and then ask to “book it,” since the user interface shows that it can be booked. A booking request is then shown AND the script also modifies my calendar to add a tentative reservation. Both items clearly show that the requests have not yet left my computer. When I land, the requests go through to the Internet and on to the booking web service and to Exchange. It turns out that the restaurant has a free table, and I get back an approval with a reservation number and time. But the service acting as a middleman on the Internet also updated my “real” calendar to reflect this change. Now I need to replace the tentative reservation in my calendar with the real one created in Exchange by the Internet, and I might as well delete the booking request, since my calendar now clearly shows the reservation. Script handles this automatically, and I’m OK and a happy camper. But should I have even modified my local calendar? Probably not, since the integration process on the Internet was going to do it anyway, and modifying it locally just makes synchronization harder. I should have waited for the change on the calendar to come back to my client.
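The better behavior I just described, waiting for the confirmed reservation rather than writing a tentative local calendar entry, might look like this sketch (all names hypothetical):

```javascript
// Hold the booking in the request itself and only write the calendar
// entry when the confirmed reservation comes back from the Internet.
// removeRequest() and the calendar API are hypothetical.
function onBookingResponse(bookingRequest, response, calendar) {
  if (response.approved) {
    // The "real" reservation (with confirmation number) replaces the request.
    calendar.add({
      title: bookingRequest.restaurant,
      time: response.confirmedTime,
      confirmation: response.reservationNumber,
    });
    removeRequest(bookingRequest); // the calendar entry now tells the story
  } else {
    bookingRequest.status = "denied"; // big red X; user looks at the reason
  }
}
```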

In practice this tends to work: This all sounds quite tricky, but as someone who has been using a Blackberry for three years now, I can tell you it really isn’t. You get very used to eyeballing your mail to see if it has actually been sent yet or not. You soon wish that the changes you make to your calendar carried similar information, since you’re never sure that your tireless assistant hasn’t made some changes to your calendar that conflict with your own, and you want to know whether those changes are approved or not. What it does require is a decision about where changes are made and how the user is made aware of them. If the user is connected, of course, and the web services are fast and the connection is quick, then all this will be essentially transparent. Almost before the user knows it, the changes will have been approved or rejected, and so the tentative nature of some of the data will not be clear. In short, this system works better and provides a better user experience when connected at high speeds. Speed will still sell. But the important thing is that it works really well even when the connection is poor, because all changes respond immediately by adding requests, thus letting the user continue working, browsing, or inspecting other related data. By turning all requests to alter data into data packets, the user interface can also decide whether to show these overtly (as special outboxes, for example, or a unified outbox) or just to show them implicitly by showing that the altered data isn’t yet “final,” or even not to alter any local data at all until the requests are approved. For example, an approvals system might only have buttons to create approval/denial requests and not enable the user to directly alter the items being approved (invoices, expenses, transfers) at all.
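Because the requests are just data packets, both presentations can be derived from the same queue. A speculative sketch, with a made-up icon mapping:

```javascript
// The same queue of request packets can back either a unified "outbox"
// view or per-item inline status icons. The icon mapping is hypothetical.
const icons = {
  "pending-local": "✎", // altered locally, not yet sent
  "sent": "…",          // sent, awaiting a reply
  "approved": "",       // final: no icon, the data is simply current
  "denied": "✗",        // the big red X
};

function outboxView(requests) {
  return requests.filter(r => r.status === "pending-local" || r.status === "sent");
}

function statusIcon(item, requests) {
  const pending = requests.find(r => r.targetId === item.id);
  return pending ? icons[pending.status] : "";
}
```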

