Modifying Information Offline

Changing Data Offline: james@doit.org writes that I should refrain from blogging because my blog “is a real slow” one. Perhaps this is true, but I shall persevere. In this entry, I’m going to discuss how I imagine a mobilized or web services browser handles changes and service requests when it isn’t connected. This is really where the pedal hits the metal. If you just read data and never alter it or invoke related services (such as approving an expense report or booking a restaurant), then perhaps you might not need a new browser. Perhaps just caching pages offline would be sufficient if one added some metadata about what to cache. Jean Paoli has pointed out to me that this would be even more likely if, rather than authoring your site using HTML, you authored it as XML “pages” laid out by included XSLT stylesheets, because then you could even use the browser to sort/filter the information offline. A very long time ago, when I was still at Microsoft (1997), we built such a demo using XSLT and tricky use of JavaScript to let the user do local client-side sorting and filtering. But if you start actually trying to update trip reports, approve requests, reserve rooms, buy stocks, and so on, then you have forms of some sort running offline at least some of the time, code has to handle the inputs to those “forms,” and you have to think through how they are handled.

XAML: First, a digression. I promised I’d dig into this a bit more. At the end of the day, I think that treating XAML as an industry standard for UI is premature and assumes that Windows will have complete dominance. It is essentially an extremely rich XML grammar for describing UI and user interactions. It subsumes, declaratively, the kinds of things VB can do, the flow patterns in HTML or Word, and the 2-D and time-based layout one sees in PowerPoint or, these days, in Central and Royale from Macromedia. In short, it is a universal UI canvas, described in XML, targeting Avalon, Longhorn’s new graphics engine. That is the key point. It isn’t an industry standard unless you assume that Avalon’s graphics are pervasive, which I think is a stretch. Also, people are talking about this as though it will be here next month. As far as I can determine, Microsoft’s next massive OS effort, Longhorn, will ship somewhere between 2005 and 2006. In short, it is probably three years away. Three years from now my daughter will be grown up and in college, and who knows what the world will look like. I have no doubt that Microsoft is salivating at the thought that this will subsume HTML (not to mention Flash and PDF) and thus put those pesky W3C folks out of business, but I can’t really worry about it. Kevin Lynch of Macromedia should be the pundit on this one. End of digression.

Browser Model so far: As discussed already, this new browser I’m imagining doesn’t navigate across pages found on the server and addressed by URLs. It navigates across cached data retrieved from web services. It separates the presentation – an XML document made up of a set of XHTML templates, metadata, and signed script – from the content, which is XML. You subscribe to a URL which points to the presentation. This causes the XML presentation document to be brought down and the UI to be rendered, and it starts the process of requesting data from the web services. As this data is fetched, it is cached on the client. This fetching normally runs in the background, just as mail and calendar on the Blackberry fetch the latest changes to my mail and calendar in the background. The data the user initially sees will be the cached data. Other more recent or complete information, as it comes in from the Internet, will dynamically “refresh” the running page or, if the page is no longer visible, will refresh the cache. I’m deliberately waving my hands a bit about how the client decides what data to fetch when; I’m giving a keynote talk about this at XML 2003 and I want to save some of my thunder. So far, though, I’ve described a read-only model: great for accessing information warehouses and personal data like clinical trial histories or training materials, for finding good restaurants in the neighborhood, or for doing project reviews, all while offline, but not as good for actually updating the clinical trials, entering notes into them, building plans for a team, commenting on the training materials, or booking the restaurants.
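
To make this concrete, here is a minimal sketch, in TypeScript, of what the subscribe/fetch/refresh cycle might look like. None of this is a real API; the names, the shape of the presentation document, and the use of fetch are my assumptions for illustration.

```typescript
// A minimal, hypothetical sketch of the subscribe / background-fetch /
// refresh cycle described above. All names and shapes are assumptions.

interface Presentation {
  templates: string[];     // XHTML templates (as markup strings, say)
  dataServices: string[];  // web service endpoints supplying the XML content
}

const cache = new Map<string, string>(); // service URL -> last data fetched

async function subscribe(presentationUrl: string): Promise<void> {
  // Bring down the presentation document and render immediately
  // from whatever is already cached.
  const presentation = (await (await fetch(presentationUrl)).json()) as Presentation;
  render(presentation, cache);

  // In the background, ask each web service for fresher data.
  for (const service of presentation.dataServices) {
    fetch(service)
      .then((res) => res.text())
      .then((data) => {
        cache.set(service, data);    // refresh the cache...
        render(presentation, cache); // ...and the running page, if still visible
      })
      .catch(() => {
        // Offline or unreachable: keep showing the cached data.
      });
  }
}

function render(p: Presentation, c: Map<string, string>): void {
  // Apply the XHTML templates to the cached content (elided here).
}
```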

It’s a fake: It is very important to remember in this model that “reality” usually isn’t on the device, be it a PC or a Blackberry or a Good or a Nokia 6800. Because the information on the device is incomplete and may have been partially thrown out (it is a cache), you don’t really know which tasks are in a project, which patients are in a trial, or which materials have been written for a section. You only know which ones you have cached. The world may have changed since then. Your client-side data (when running offline) may be incomplete. So, if you modify data, you need to remember that you are modifying data that is potentially out of date.
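
One tiny, hypothetical illustration of that point: if each cached item remembers when it was last confirmed against the server, the client can at least flag that a modification is aimed at possibly stale data. The one-hour budget below is an arbitrary assumption.

```typescript
// Hypothetical: each cached item carries the time it was last confirmed,
// so the UI can warn that a change targets possibly out-of-date data.

interface CachedItem<T> {
  value: T;
  fetchedAt: Date; // when this copy was last confirmed against the server
}

const MAX_AGE_MS = 60 * 60 * 1000; // assumption: older than an hour is "stale"

function isPossiblyStale(item: CachedItem<unknown>): boolean {
  return Date.now() - item.fetchedAt.getTime() > MAX_AGE_MS;
}
```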

Don’t change it. Request the change: Accordingly, I recommend a model in which, in general, data isn’t directly modified. Instead, requests to modify it (or requests for a service) are created. For example, if you want to book a restaurant, create a booking request. If you want to remove a patient from a clinical trial, create a request to do so. If you want to approve an expense report, create a request to approve it. Then relate these requests to the item that they would modify (or create) and show, in some iconographic manner, one of four statuses (sketched in code after the list):
1) A request has been made to alter the data but it hasn’t even been sent to the Internet.
2) A request has been sent to the Internet, but no reply has come back yet.
3) The request has been approved.
4) The request has been denied.
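
Here is what such a change-request record might look like as data, sketched in TypeScript. The type names, fields, and actions are hypothetical; the four statuses are the ones just listed.

```typescript
// A sketch of the change-request model: changes are data, not mutations.
// All names are illustrative, not from any spec.

enum RequestStatus {
  Local = "local",       // 1) made, but not yet sent to the Internet
  Sent = "sent",         // 2) sent, but no reply has come back yet
  Approved = "approved", // 3) the request has been approved
  Denied = "denied",     // 4) the request has been denied
}

interface ChangeRequest {
  id: string;           // client-generated id, used to match up replies
  targetItemId: string; // the cached item this request would modify or create
  action: string;       // e.g. "approve-expense", "book-table" (hypothetical)
  payload: unknown;     // action-specific parameters
  status: RequestStatus;
  createdAt: Date;
}

// Instead of mutating cached data, the client records a request against it.
function requestChange(
  targetItemId: string,
  action: string,
  payload: unknown,
): ChangeRequest {
  return {
    id: crypto.randomUUID(),
    targetItemId,
    action,
    payload,
    status: RequestStatus.Local, // shown with the "not yet sent" icon
    createdAt: new Date(),
  };
}
```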

Expense Reports: Let me start with a simple example. While offline, the user sees a list of expense reports to approve. On the plane, he/she digs into them, checks out details, and then marks some for approval and adds a query to others. All these changes show up, but with an iconic reminder on the status/query fields that these fields reflect changes not yet sent to the Internet. The user interface doesn’t stall or block because the Internet isn’t available. It just queues up the requests to go out so that the user can continue working. The user lands, and immediately the wireless LAN or GPRS starts talking to the Internet. By the time the user is at the rental bus, the requests for approval or further detail have been sent, and the icons have changed to reflect that the requests have now gone out to the Internet. Some new data has come in with more expense reports to be approved and some explanations. By the time the user gets to his/her hotel, these requests on the Internet have been dequeued and processed, invoking the appropriate back-end web services, and responses have been queued up. By the time the user connects at the hotel, or goes down to the Starbucks for coffee and reconnects there (or, if the device is using GPRS, much sooner), the responses have come in. If the requests are approved, then the icon just goes away, since the changed data is now approved. If the requests are denied, then some intelligence will be required on the client, but in the simplest case the icon shows a denied change request with something like a big red X (this is what the Blackberry does if it can’t send mail for some reason, as I learned to my sorrow on Sunday). The user sees this and then looks at the rejection to see why.
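
A hedged sketch of the queue-and-flush behavior in that story: approvals pile up in an outbox while offline and drain as soon as connectivity returns. The endpoint URL and the event wiring are assumptions.

```typescript
// Hypothetical outbox: requests queue locally and drain on reconnect.

interface Outgoing {
  body: string;             // serialized change request
  status: "local" | "sent"; // drives the status icon
}

const outbox: Outgoing[] = [];

function queueApproval(reportId: string): void {
  outbox.push({
    body: JSON.stringify({ action: "approve", reportId }),
    status: "local", // icon: "not yet sent to the Internet"
  });
}

async function flushOutbox(endpoint: string): Promise<void> {
  for (const msg of outbox) {
    if (msg.status !== "local") continue;
    try {
      await fetch(endpoint, { method: "POST", body: msg.body });
      msg.status = "sent"; // icon flips to "sent, awaiting reply"
    } catch {
      return; // still offline; try again on the next connectivity event
    }
  }
}

// e.g. flush whenever the device reports it is back online (browser event):
addEventListener("online", () => {
  void flushOutbox("https://example.com/requests"); // hypothetical endpoint
});
```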

Notice that all this requires some intelligence on the part of the web services browser and potentially some intelligence on receipt of the approvals or denials from the Internet. In the model I’m imagining, the client-side intelligence will be done in script that will be associated either with the user actions (pressing submit after approving or querying) or with the Internet actions (returning approval or rejection). The script will have access to the content and can modify it. For example, on receipt of a rejection, it might roll back the values to their prior ones. Server-side intelligence will be handled using your web service server of choice.
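
For instance, the rollback-on-rejection behavior could be one such script hook. Below is a minimal, assumed shape for it; the reply format and the bookkeeping maps are inventions for illustration.

```typescript
// Hypothetical reply-handling script: approvals finalize the change,
// rejections roll the value back to what it was before.

interface Reply {
  requestId: string;
  approved: boolean;
  reason?: string;
}

const items = new Map<string, unknown>();         // itemId -> current value
const priorValues = new Map<string, unknown>();   // requestId -> value before the change
const requestTargets = new Map<string, string>(); // requestId -> itemId it touched

function onReply(reply: Reply): void {
  const itemId = requestTargets.get(reply.requestId);
  if (itemId === undefined) return;

  if (reply.approved) {
    // The tentative change is now real; the status icon can go away.
    priorValues.delete(reply.requestId);
  } else {
    // Roll back to the prior value and surface the "big red X" case.
    items.set(itemId, priorValues.get(reply.requestId));
    showDeniedIcon(itemId, reply.reason);
  }
  requestTargets.delete(reply.requestId);
}

function showDeniedIcon(itemId: string, reason?: string): void {
  // UI-specific; elided in this sketch.
}
```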

Restaurant Reviews and Booking: Let’s take a slightly more complicated example. I’m flying into Santa Fe and don’t know the town. Before I leave NYC for Santa Fe, I point at the mobilized URL for my favorite restaurant review site and check off a price range and cuisine type (and/or stars) that I care about. By the time I get on the plane and disconnect from wireless or GPRS, the site has fetched all the restaurants and reviews matching what I’ve checked off onto my PC or PDA. On the plane, I browse through this, pick a restaurant, and then ask to “book it,” since the user interface shows that it can be booked. A booking request is then shown, AND the script also modifies my calendar to add a tentative reservation. Both items clearly show that the requests have not yet left my computer. When I land, the requests go through to the Internet and on to the booking web service and to Exchange. It turns out that the restaurant has a free table, and I get back an approval with a reservation number and time. But the service acting as a middleman on the Internet has also updated my “real” calendar to reflect this change. Now I need to replace the tentative reservation in my calendar with the real one created in Exchange by the Internet, and I might as well delete the booking request, since my calendar now clearly shows the reservation. Script handles this automatically, and I’m OK and a happy camper. But should I have even modified my local calendar? Probably not, since the integration process on the Internet was going to do it anyway, and doing it locally just makes synchronization harder. I should have waited for the change on the calendar to come back to my client.
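
The reconciliation step at the end of that story might look something like this sketch (all names invented): on approval, drop the tentative local entry, keep the entry the server created, and discard the now-redundant booking request.

```typescript
// Hypothetical reconciliation when a booking approval arrives.

interface CalendarEntry {
  id: string;
  title: string;
  tentative: boolean;
}

const calendar = new Map<string, CalendarEntry>();
const tentativeByRequest = new Map<string, string>(); // requestId -> tentative entry id

function onBookingApproved(requestId: string, confirmed: CalendarEntry): void {
  // Remove the local, tentative reservation (if the script created one)...
  const tentativeId = tentativeByRequest.get(requestId);
  if (tentativeId !== undefined) calendar.delete(tentativeId);

  // ...keep the entry the server-side integration created in Exchange...
  calendar.set(confirmed.id, confirmed);

  // ...and drop the booking request, since the calendar now shows the booking.
  tentativeByRequest.delete(requestId);
}
```

As the paragraph concludes, the simpler design is to skip the tentative local entry entirely and wait for the server’s calendar change to come back, which makes this reconciliation unnecessary.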

In practice this tends to work: This all sounds quite tricky, but as someone who has been using a Blackberry for 3 years now, it really isn’t. You get very used to eyeballing your mail to see whether it has actually been sent yet or not. You soon wish that the changes you make to your calendar carried similar information, since you’re never sure that your tireless assistant hasn’t made some changes to your calendar that conflict with your own, and you want to know whether those changes are approved or not. What it does require is a decision about where changes are made and how the user is made aware of them. If the user is connected, of course, and the web services are fast and the connection is quick, then all this will essentially be transparent. Almost before the user knows it, the changes will have been approved or rejected, and so the tentative nature of some of the data will not be apparent. In short, this system works better and provides a better user experience when connected at high speeds. Speed will still sell. But the important thing is that it works really well even when the connection is poor, because all changes respond immediately by adding requests, thus letting the user continue working, browsing, or inspecting other related data. By turning all requests to alter data into data packets containing the request, the user interface can also decide whether to show these overtly (as special outboxes, for example, or a unified outbox), to show them implicitly by showing that the altered data isn’t yet “final,” or even not to alter any local data at all until the requests are approved. For example, an approvals system might only have buttons to create approval/denial requests and not let the user directly alter the items being approved (invoices, expenses, transfers) at all.
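
Because every pending change is just a data packet, the same store can back either presentation choice: an explicit, unified outbox or an implicit per-item badge. A tiny sketch, with invented names:

```typescript
// Hypothetical: one store of pending changes, two ways to present it.

interface PendingChange {
  targetItemId: string;
  status: "local" | "sent";
}

const pending: PendingChange[] = [];

// Explicit presentation: a unified outbox listing everything in flight.
function outboxView(): PendingChange[] {
  return [...pending];
}

// Implicit presentation: a badge on an item that has a request in flight.
function badgeFor(itemId: string): "local" | "sent" | undefined {
  return pending.find((p) => p.targetItemId === itemId)?.status;
}
```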
