Warning: this is a rare, nerdy technical post. It is about healthcare XML standards.
I was kindly asked to testify at a meeting in DC this week about standards, at an hour when I'm normally not awake. Despite a deep aversion to not getting enough sleep, I was up and on the phone. What made me do such a thing? Well, the discussion was about what will actually work in terms of making health data liquid. What standards should be used for the integration of such data?
Somewhat to my surprise, and usually to my pain, I've been involved in several successful standards. One was used to exchange data between databases and consumer applications like spreadsheets and Access. It was called ODBC and worked surprisingly well after some initial hiccups. Another was the standard for what today is called AJAX, namely building complex interactive web pages like Gmail's. Perhaps most importantly, there was XML. These are the successes. There were also some failures. One that stands out in my memory is OLE DB, which was an attempt to supplant ODBC. One that comes close to being a failure was (and is) the XML Schema specification. From all these efforts, there were a few lessons learned, and it is these that I shared with DC this Thursday. What are they?
- Keep the standard as simple and stupid as possible. The odds of failure are at least the square of the degrees of complexity of the standard. It may also be the square of the size of the committee writing the standard. Successful standards are generally simple, focused, and easy to read. In the health care world, this means focusing first on the data that can be encoded unambiguously, such as demographics, test results, and medicines. Don't try to cover all types of health data for all purposes. Don't focus on how to know whether your partner should have access to what (see points 2, 3, and 4 below).
- The data being exchanged should be human readable and easy to understand. Standards are adopted by engineers building code to implement them. They can only build if they can easily understand the standard (see above) and easily test it. This is why, in the last 15 years, text standards like HTTP, HTML, XML, and so on have won. The developers can open any text editor, look at the data being sent and received, and see if it looks right. When Tim Berners-Lee first did this on the internet, most of the "serious" networking people out there thought using text for HTTP was crazy. But it worked incredibly well. Obviously this worked well for XML too. This has implications. It isn't enough to just say XML. The average engineer (who has to implement these standards) should be able to eyeball the format and understand it. When you see XML grammars that only a computer can understand, they tend not to get widespread adoption. There are several so-called XML grammars that layer an abstract knowledge model on top of XML, like RDF, and in my experience they are much harder to read and understand, and they don't get used much. In my opinion HL7 suffers from this.
- Standards work best when they are focused. Don't build an 18-wheeler to drive a city block. Standards often fail because committees with very different, complex goals come together without actual working implementations to sanity-check both the complexity (see point 1 above) and the intelligibility (see point 2 above). Part of the genius of the web was that Tim Berners-Lee correctly separated the protocol (HTTP) from the stuff the browser should display (HTML). It is like separating an envelope from the letter inside. It is basic. And necessary. Standards which include levels or layers all jammed into one big thing tend to fail because the poor engineers have to understand everything when all they need to understand is one thing. So they boycott it. In health care, this means don't include in one standard how to encode health data, how to decide who gets it, and how to manage security. If all I, as an engineer, want is to put together a list of medicines for a patient and send it to someone who needs it, then that's all I should have to do. The resulting XML should look like a list of medicines to me. Then, if it doesn't work, I can get on the phone with my opposite number and usually figure out in 5 minutes what's wrong. I can also usually author this in a day or two, because I don't have to read, learn, and understand a spec the size of a telephone book. I don't have to understand the "abstract data model". The heart of the initial XML spec was tiny. Intentionally so. I heard someone say indignantly, about the push to simplify Health IT standards, that we should be "raising the bar on standards", not lowering it. This is like arguing that kids should learn to fly an airplane before they can walk to the next-door neighbor's house. All successful standards are as simple as possible, not as hard as possible.
- Standards should have precise encodings. ODBC was precise about data types. Basic XML is a tiny standard except for the precise encoding of the characters of the text, Unicode. That is most of the spec, properly so, because it ensures that the encodings are precise. In health care this means that the standard should be precise about the encodings for medicines, test results, demographics, and conditions, and make sure that the encodings can be used legally and without royalties by all parties. The government could play a role here by requiring NPIs for all doctor-related activities, SNOMED CT for all conditions, LOINC for all labs, and some encoding for all medicines (be it NDC, RxNorm, or FDB), and by guaranteeing that use of these encodings is free for all uses.
- Always have real implementations that are actually being used as part of the design of any standard. It is hard to know whether something actually works or can be engineered in a practical sense until you actually do it. ODBC, for example, was built by many of us actually building it as we went along. In the health care world, a lot of us have built and used CCR as we go, learning very practically what works and what doesn't, and that has made it a good, easy-to-use standard for bundling health data. And the real implementations should be supportable by a single engineer in a few weeks.
- Put in hysteresis for the unexpected. This is something that the net formats do particularly well. If there is something in HTTP that the receiver doesn't understand, it ignores it. It doesn't break. If there is something in HTML that the browser doesn't understand, it ignores it. It doesn't break. See Postel's law. Assume the unexpected. False precision is the graveyard of successful standards. XML Schema did very badly in this regard. Again, CCR does fairly well here.
- Make the spec itself free, public on the web, and include lots of simple examples on the web site. Engineers are just humans. They learn best by example, and if the standard adheres to the points above, then the examples will be clear and obvious. Usually you can tell if a standard is going to work if you go to the group's web site and there is a clear definition and there are clear examples of the standard that anyone can understand. When you go to the HL7 site, the generality and abstraction and complexity are totally daunting to the average Joe. It certainly confuses me. And make no mistake: engineers are average Joes with tight deadlines. They are mostly not PhDs.
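To make the readability, focus, and tolerance points above concrete, here is a minimal sketch in Python. The XML grammar is invented for illustration (it is not CCR or any real standard): the list of medicines is readable enough to eyeball, and the parser ignores any element it does not understand rather than breaking.

```python
import xml.etree.ElementTree as ET

# A hypothetical medication list. The element names and the code
# attributes are invented for illustration; this is not a real standard.
doc = """
<medications>
  <medication>
    <name codeSystem="RxNorm" code="demo-001">Lisinopril 10 mg tablet</name>
    <frequency>once daily</frequency>
    <flavor>mint</flavor>
  </medication>
</medications>
"""

KNOWN = {"name", "frequency"}

def parse(xml_text):
    """Pull out the elements we understand; silently skip the rest
    (Postel's law: an unknown element doesn't break the receiver)."""
    meds = []
    for med in ET.fromstring(xml_text).findall("medication"):
        meds.append({c.tag: (c.text or "").strip() for c in med if c.tag in KNOWN})
    return meds

print(parse(doc))
# [{'name': 'Lisinopril 10 mg tablet', 'frequency': 'once daily'}]
```

Note that the unexpected `<flavor>` element is simply dropped; an engineer can eyeball both the message and the parser in a minute.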
Let’s be honest, a lot of standards are written for purposes other than promoting interoperability. Some exist to protect legacy advantages or to create an opportunity to profit from proprietary intellectual property. Others seem to take on a life of their own and seem to exist solely to justify the continued existence of the standards body itself or to create an opportunity for the authors to collect on juicy consultant fees explaining how the standard is meant to work to the poor saps who have to implement it. I think we can agree that, whatever they are, those are usually not good standards. Health data interoperability is far too important an issue to let fall victim to such an approach.
excellent post, clearly written… must hit my students w this
OpenEHR seems to me to meet your needs, Adam. Why ignore it?
To be honest, I'm not ignoring anything. Keas isn't an EHR, so it is somewhat removed from this, but if OpenEHR has a super easy XML way to transport data, along with clear, unambiguous encodings of the labs, meds, conditions, and demographics in the XML, wonderful. But I will say this. I just followed the link you give, searching for simple XML examples of how to do this. I didn't see them. I did see an incredibly complex XML schema (see my comments on XML Schema) and a lot about semantic models which, as I said in my post, have frequently rendered the actual engineering harder. I tried to open the XML document they linked to, “http://www.openehr.org/releases/architecture/computable/terminology/terminology.xml”, but it was an invalid link. I'm not saying that OpenEHR isn't “the solution”. But the link doesn't make it obvious.
OpenEHR has a lot to offer (I quite like the concept of Archetypes, in particular), but simplicity of implementation is not one of its many virtues.
For the moment, that is a fair enough response, but in terms of the ‘super-easy’ XML way to transport data, it has been implemented industrially and only has to be moved into openEHR. It is called the ‘Template Data Schema’ (TDS) approach, and generates an XSD per message, which is defined using an openEHR template. All messages in openEHR are defined as schemas generated this way (i.e. from underlying templates and archetypes), rather than being hand-built.
If anyone wants information on this, feel free to mail email@example.com
– Thomas Beale
Excellent post, and I agree *fully* with all your recommendations. You’ve definitely hit all of the points re: interop standards that I’ve been telling colleagues for years. BUT. In your first point, you make the (absolutely correct, IMHO) assertion that any effort like this should start with things that can be unambiguously represented, like “demographics, test results, medicines”. The problem, though, is that I’d argue that those three things are way harder to represent unambiguously than they may seem, especially given our current medical vocabulary/ontology options.
Things like LOINC, RxNORM, SNOMED-CT, etc. are way harder to use *correctly* than most engineers without experience working with medical data think. (Note that I'm not referring here to the author of this post, but rather to the legions of engineers who will ultimately be trying to implement and use healthcare interop standards.) Maybe a better way of saying that would be to say that many existing medical vocabularies (my favorite example is NDC codes) were not designed for interop, and when you try to use them for interop, they sometimes fall short, and when they do, it's inevitably in "interesting" (read "surprising and painful") ways, and in ways that are not obvious to engineers without some background in medical informatics.
Some of this has to do with implementation details (LOINC's screwy flat data model, or SNOMED-CT's post-coordinated nature), and other times it's because of the fundamental nature of a knowledge source. RxNORM, for example, has a data model set up in a way that makes it easy to express concepts about the manufacturing and packaging of drugs, but if one were trying to use it to, say, talk about what had been prescribed for a particular patient, or what a patient was allergic to, it would not be so well-suited. I'm not saying it's impossible; I'm just saying that it's way harder than simply saying "vocabulary/ontology X talks about drugs, let's just use that for medications" and being done with it. And, in my experience, this subtlety is the biggest obstacle facing non-clinician/informaticians who have to get into medical interop (i.e., your garden variety engineer who might be trying to add support for this stuff to her application).
All I’m really saying here is that, while I agree wholeheartedly that keeping standards as simple as possible is absolutely vital, there’s a certain amount of complexity inherent to the clinical interoperability problem that just isn’t going to go away- or, if it does go away, will take a lot of the value of interop with it. Part of this is due to inherent ambiguity to certain clinical processes, and part of it is due to lack of appropriate vocabularies/ontologies (i.e., vocabs that are designed from the ground up for clinical interop as opposed to billing, generic knowledge representation, etc. etc.).
Given that we have imperfect knowledge sources, what can be done? One way would be to make sure that any spec that relied on external knowledge sources (SNOMED-CT, etc.) spelled out clearly and precisely how to handle ambiguity issues related to each particular source. So, for example, anything using SNOMED would have to specify exactly how post-coordination should be handled (if it should be handled at all), etc. Anything using LOINC would have to specify how to handle what I think of as “out of gamut” lab tests- if an incoming message tells me about some past test results using a code for, say, a particular sub-type of blood gas test that my clinic’s lab doesn’t perform (and therefore our EHR doesn’t have an internal code for it), can I replace it with a less specific but equivalent blood gas test code? If I do that, do I need to preserve the original code? When I send out a message about this patient and their tests, do I need to provide the original code, or can I use the one that my system actually used internally? Not that this is a particularly challenging example, but you see my point. If the spec didn’t specify what to do in this sort of situation, Clinic A might decide to do things one way, Clinic B might decide another, and *boom*, now there’s all kinds of uncertainty about what actually happened to the patient. Not that this is an insoluble problem or anything, but still. It’s just one of many possible examples here; I’m sure you all can think of many more.
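A sketch, in Python, of the "out of gamut" fallback described above: map an incoming code to the closest code the local system knows, but keep the original so outgoing messages can still carry it. The codes and the fallback table here are hypothetical.

```python
# Codes our (hypothetical) EHR understands internally:
LOCAL_CODES = {"BG-GENERIC"}
# Specific incoming codes we map down to a less specific local equivalent:
FALLBACKS = {"BG-ARTERIAL-SPECIAL": "BG-GENERIC"}

def normalize(code):
    """Return the locally usable code plus the original, preserved."""
    if code in LOCAL_CODES:
        return {"code": code, "original_code": code}
    if code in FALLBACKS:
        # Less specific, but equivalent for our purposes; keep the
        # original so downstream consumers can recover it.
        return {"code": FALLBACKS[code], "original_code": code}
    raise ValueError(f"unmapped code: {code}")

print(normalize("BG-ARTERIAL-SPECIAL"))
# {'code': 'BG-GENERIC', 'original_code': 'BG-ARTERIAL-SPECIAL'}
```

Whether an outgoing message echoes `code` or `original_code` is exactly the kind of decision a spec should pin down, rather than leaving Clinic A and Clinic B to guess differently.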
Of course, I’m well aware that this line of thinking runs somewhat counter to the general theme of simplicity, but I’m reminded of Albert Einstein’s notion of making everything as simple as possible, but no simpler. I worry that the “no simpler” bar for healthcare interop is higher than we all want it to be, that’s all. It’s most certainly higher than most engineers *think* it is, that’s for darn sure. I know that was my biggest surprise when I started getting into this stuff…
Absolutely true. Having walked into this landmine myself several years ago when starting Google Health, it never ceases to amaze me. But I think just having known encodings is still a lot better than not. And usually the 80/20 rule applies. 20% of the codes are more than 80% of the traffic (we certainly find this at Keas with regard to labs) and these are the best known/understood.
Quite right. Have you read any of Marc Overhage’s papers on their RHIO’s ELR experience? They’ve gone to some pretty incredible lengths to parse ELR submissions to the Indianapolis RHIO for reportable condition surveillance, and have found more or less the same thing w.r.t. 80/20. They’ve also run into some pretty fascinating ways that people manage to screw up HL7…
Overhage et al. A Comparison of the Completeness and Timeliness of Automated Electronic Laboratory Reporting and Spontaneous Reporting of Notifiable Conditions. Am J Public Health (2008) vol. 98 (2) pp. 344-350
Overhage et al. Electronic laboratory reporting: barriers, solutions and findings. Journal of public health management and practice : JPHMP (2001) vol. 7 (6) pp. 60-6
One of the important concepts which seems to be missing from this discussion is the fact that healthcare is orders of magnitude more complex than other areas which have seen successful automation and integration. For example: the 80:20 rule and medical vocabulary. My personal installation of the UMLS (forgetting the errors in mapping SNOMED and LOINC in it) contains 1,423,965 distinct concepts. The "useful" 20% would still be a _data dictionary_ with 284,793 entries. SNOMED has both a post-coordination refinement specification and a new-concept specification (same syntax, very different concepts). LOINC isn't flat; it has 6 dimensions used for most concepts. RxNorm has very different uses from the other drug terminologies we should use (please, I beg of you, NOT NDC!) such as NDF-RT. The other terminologies we also need are ICD-10 (the next-generation billing classification system: not for clinical use, but OK for billing), ICD-O-3 for cancer, GO, OMIM, and HUGO (genomic medicine), ICD-10-PCS (procedure codes), and a handful of subsets of the NCI Thesaurus (e.g. those the FDA uses in the Structured Product Labeling XML document).
The complexity of the domain cannot be simplified beyond a certain point (q.v. Einstein’s observation that everything should be as simple as possible but no more).
That aside, there are systematic problems in health IT, largely in the US, stemming from the lack of a health care system in the US and a total reliance on volunteer efforts for organizations like HL7 (as a disclaimer, I am a co-chair of an HL7 WG). There is a lot of work to be done, and the barrier to entry is high.
The objective of computable semantics between systems (either within or between enterprises) is obtainable, but we need more resources to build the specification / metadata / standards infrastructure.
Right now there is an effort for openEHR, ISO, CEN, IHTSDO and HL7 to collaborate on the creation of common formalisms for how we say things like "this patient has a low-pitched grade II/VI holosystolic murmur at their left sternal border which vanishes with squatting" or "the patient's mother said that his brother has M5 AML in remission and that the patient is being considered as a BMT donor for any future recurrence". Trying to express these in a computable, unambiguous fashion is just hard. This joint effort to create detailed clinical models (DCMs) will go a long way toward bridging the semantic gap between specifications and will provide a set of deterministic and shared content.
See my comments elsewhere; as I pointed out there, I have actually built and deployed both health interoperability standards (I started/ran Google Health) and some other concrete industry grammars. Every industry has complexities which can derail a standard, and in each case the best solution, in my personal experience, has been to start with something focused, simple, and easy, knowing that it will not solve all problems but will solve some.
AND, on a completely unrelated note, while I'm on my soapbox, something else just came to mind that your post didn't address but maybe should have: i18n/l10n. I've come to believe that any standardization effort that doesn't think at least a little bit about i18n in the beginning is going to either run into or cause major headaches down the line…
Good post. With respect to e-health standards, you may find my recent posts at http://wolandscat.net/category/health-informatics/ of some interest – essentially on the problem of why standardisation in e-health is so bad.
As I am involved in e-health in general, and in openEHR, my own conclusion after 15 years is that there is no escape from a semantic model-based approach for health (for any industry, really). Simple point-to-point standards can be engineered, like CCR, but they have no flexibility or re-usability; they just solve one problem. There are 1,000,000 problems to solve in health, generally similar, but not the same. The conclusion we came to is that point-to-point XSDs and the like should be seen as ‘concrete standards’ and generated out of a knowledge-engineering environment. The result can then still be concrete specifications with the features you mention above. In other words, we need to ensure the ‘usability’ of the standard, while ensuring that all such standards in a given domain space are based on a common underlying semantic library. This is all working in openEHR. Of course, development is still ongoing, but commercial solutions already exist based on this approach, so we know it works.
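A toy illustration of the generation idea described here: a simple template (standing in for an openEHR template; the field names and types are invented) from which a concrete XSD fragment is emitted mechanically rather than hand-built.

```python
# A toy 'template': field name -> XSD type. This stands in for a real
# openEHR template; the fields are invented for illustration.
TEMPLATE = {"name": "xs:string", "dose": "xs:string", "quantity": "xs:decimal"}

def template_to_xsd(root_name, template):
    """Mechanically emit an XSD fragment from the template."""
    fields = "\n".join(
        f'      <xs:element name="{n}" type="{t}"/>' for n, t in template.items()
    )
    return (
        f'<xs:element name="{root_name}">\n'
        f'  <xs:complexType>\n'
        f'    <xs:sequence>\n{fields}\n'
        f'    </xs:sequence>\n'
        f'  </xs:complexType>\n'
        f'</xs:element>'
    )

print(template_to_xsd("medication", TEMPLATE))
```

The point is that the concrete, legible schema remains the artefact engineers see, while the template is the single place the semantics are agreed.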
To give you a feel for some of the semantic models (known as archetypes) – see http://www.openehr.org/knowledge/ (login, browse with explorer on the left, double-click to get a look at some models). You will find a lot of stuff on the wiki to do with terminology and many other topics in openEHR – http://www.openehr.org/wiki/dashboard.action
I can't agree too strongly with the post by Stephen Bedrick above (while also agreeing with your own post, which our experience also agrees with): there is an inescapable level of complexity in some domains, and dealing with it properly demands new engineering paradigms. Creating a semantic-free solution (i.e. just a whole lot of XML message definitions) is not an option. You might wonder why the IETF can make standards without the same approach, but I would argue that the kind of ‘semantics’ in information-rich domains like health is vastly different from infrastructure standards (which are essentially content-agnostic).
In the end I would say that your post is an excellent QA check for ‘standards’ that should be generatable from a knowledge engineering environment: in other words, if we can’t make standards artefacts that pass your test, we are doing something wrong. It just happens that our way of doing this is a bit smarter than simply making each of those standards artefacts as a standalone.
So, I think that even if this is true, it should be separated out from the core capabilities. When I was at Microsoft, there was a concerted effort by some very talented people to develop a consistent semantic model for just some very basic things like appointments, contacts, and so on. Let me be clear: this was just an attempt to get agreement on a semantic model within Microsoft, not across the world or the industry. It failed totally at the time. It was just too hard to get semantic agreement between the 4 or 5 groups involved within the same company. Did I mention that one of their current CTOs, David Vaskevitch, was strongly pushing this effort? It still failed. Getting agreement on semantics seems to be really hard. And even if you did succeed, then some of the issues I've pointed out before come to the fore about the legibility of the resulting XML. Why not instead have a web service that returns the ontology for a given element/encoding if you want it? Then the message itself isn't complex.
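A sketch of that idea: the message stays trivially simple, and a separate, entirely hypothetical web service returns the ontology entry for a code only when a consumer wants it. The service URL and the code below are invented for illustration.

```python
import json
import urllib.request

# Hypothetical lookup service; the URL pattern is invented.
ONTOLOGY_SERVICE = "https://example.org/ontology/{system}/{code}"

def resolve(system, code):
    """Fetch the full ontology entry for a code, on demand."""
    url = ONTOLOGY_SERVICE.format(system=system, code=code)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# The message itself carries only compact codes and stays readable:
message = {"system": "LOINC", "code": "demo-123", "value": 5.4, "unit": "mmol/L"}
# Only consumers that need the full semantics ever call resolve().
```

The design trade-off: senders and simple receivers never pay the complexity tax, while decision-support systems that want the full model can still get it.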
I like the web-service-to-lookup-ontology idea quite a lot. It could really help keep messages simple, which I think is an absolute necessity. However, I do question how often it’d actually get used in practice. I worry that actual real-world implementors would, instead of calling out to the web service to figure out what to do with a particular element, just fall back on the “treat it as a chunk of free text and somebody will sort it all out later” approach that is all too prevalent in production systems today.
This (entirely understandable and pragmatic) approach would indeed get data moved from System A to System B, but could easily result in System B's data being useless for decision support and outcomes research, two important uses for medical information that, sadly, often are thought of as secondary priorities (if at all) when talking about interoperability. Oh, there's plenty of *talk* about supporting that sort of stuff, but when it gets down to brass tacks, too many standards let implementers off the hook too easily.
Unless and until our field’s definition of “interoperability” expands to explicitly include supporting downstream use of the data that we’re moving between systems, we’re just going to find ourselves dealing with the same problems over and over.
The challenge, of course, is how to build this level of support into interop standards without descending into (for example) HL7v3-style hell and ending up with standards that are so complex that no actual human can usefully work with them without depending on a gigabyte of library code and encyclopedic knowledge of a thousand pages of documentation.
Of course, I don’t have any sort of answer to how to do this, but there’s gotta be a way… I definitely think that the call-out-for-more-information-about-an-element approach is worth thinking about a lot more.
There is a wonderful company well-equipped to help in this space, and I have absolutely no vested interest in them whatsoever, by the way. It is Anvita. They have the web service skills, the HL7 skills, the CCR/CCD/CDA skills, and the ontological skills. Maybe we could enlist their help here?
Hi Adam, I have a few comments. Getting agreement on semantics is hard, but it is doable, and it is also unavoidable. The usual way it happens in classical software engineering is that a bunch of software people attempt to interrogate domain experts and write up their findings as various kinds of requirements: use cases, a data dictionary, etc. No matter how well or badly this interaction went, they _eventually_ commit to some semantics in their code. Most software in recent history is not very good because this process is so poor.
In a semantically-enabled world, there is no choice but to get to grips with semantic modelling. It can’t be avoided; the only choice is to put it earlier or later in the design cycle and tool chain. And getting the agreements is not impossible; see the openEHR knowledge site I indicated earlier – you will see some hundreds of clinical people agreeing (eventually) on real formal models of health information. What was needed to do this was a powerful formalism (ADL, aka ISO 13606-2), tools to build the models and a place to store and manage the collaborative model-building process.
Dynamically available ontologies are ok, but don’t help the construction of most health information systems (they would however help inferencing applications work better).
I don’t want to imply that the problem is easily solved, but we are making real progress with it in openEHR, IHTSDO and some other spaces.
Thanks for the nice article. I spotted it on “Joel on Software” and enjoyed reading it.
I think we should make the distinction between bottom-up learning and top-down learning when needing to learn a standard, and that the standards creators should cater to both of them.
Well, we're working on the next version of CDA (R3). What would you have it look like? (Perhaps contact me by email: grahame – at – kestral.com.au.) (I am co-chair of the committee in charge of CDA.)
Replied by email 🙂
Very good article. I’ve been involved with an ISO working group for a few years now and have been frustrated by not really progressing with the standards we work on. Your bullet points explain quite clearly why this is the case.
I strongly endorse the recommendations put forward by Mr. Bosworth. Software engineers, programmers, and the subject matter experts who must "use" the platform, application, and data must have an easy-to-use format, design, and interoperable parameters that create functional outputs with a high probability of long-term resilience.
Well written post and very thought provoking. I am glad that smart folks are wrestling with these issues.
It does strike me that HTTP, ODBC, and XML were all very intentionally data-agnostic standards, i.e., they were standards written to exchange data on all topics. Is it somehow different to come up with a standard for health records because they can't be data agnostic?
I think that you are fundamentally correct and incorrect at the same time. Correct in your common-sense approach to healthcare data standards. Incorrect in not factoring in that this is the complex world of healthcare, not the simpler worlds of, say, banking, finance, or computer science. There is little chance of reducing it to simplicity. HL7's structured documents workgroup did a good job of this with the clinical summary (CCD), and has done a good job on many other usable clinical document standards. But that work product is just a summary. Remember, too, that the creation of health care data standards is (unfortunately) in large part all volunteer work; it is an unfunded mandate. You get what you pay for.
Donald R. Kamens, MD FACEP FAAEM
Again, see my reply elsewhere. I've actually built some of these standards, having started and run Google Health. For getting medicines into Google from Walgreens and CVS, for example, it works well. It works pretty well for getting in labs from Quest Diagnostics. It doesn't work well for conditions, because the source data in the EMRs isn't that useful: it isn't really tagged carefully for tracking the progression of conditions over time, since it is normally used for billing. And it worked because we kept it simple.
There is one key point to make about this: in the US, apart from medications and labs, most clinical data is unstructured in most healthcare locations – due to the dictation culture (to give an example, at Mayo a few years ago when I visited, 76% of all docs spoke their notes into a phone, and the words ended up being typed as narrative by a transcriptionist). In most of the rest of the world, the ratio is the reverse, and the majority of health data outside the US is structured. So there is a technical dividing point between US systems and non-US systems: in the dictation environment, you have to resort to NLP to make the narrative processable; in a structured environment you don’t (but you have to write more complicated data capture, querying and reporting software in the first place).
This key difference between medical cultures is the source of many massive project failures due to mismatches between vendor solutions and buyer sites. It also should prevent the existence of a ‘one-size-fits-all solution’ to the semantic problem. In fact, you have to have two solutions: one for unstructured, narrative environments, and another for structured data capture environments.
True, but the point I learned at Google years ago was that doctors, when trying to learn about a patient, first asked for the medicines that they were taking and then the test results. Most felt that that data alone was incredibly indicative. Then they asked for EKGs. Then images and conditions (which they didn't trust because of the insurance/encoding issues). Put differently, mostly they wanted the hard data, not the stuff being clinically dictated. Then I discovered that this data was available electronically even when the doctor never uses a computer. Quest Diagnostics and LabCorp have the lab data. Surescripts/RxHub has the medicines. So those of us in the industry can link to the data that many doctors say they really need, even if no clinical data was ever recorded by a doctor.
Nice post. One more thing that seems to help implementers is a simple to use web based validation service as e.g. http://validator.w3.org/feed/ for RSS, http://www.jsonlint.com/ for JSON or http://severinghaus.org/projects/icv/ for iCal.
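The cheapest possible local version of such a validator, sketched in Python; a real service like the ones linked above would also check documents against the standard's schema and report line-numbered errors.

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Minimal 'validation': does the document even parse as XML?"""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<medications><medication/></medications>"))  # True
print(is_well_formed("<medications><medication></medications>"))   # False
```

Even this trivial check catches a large class of interop bugs before anyone picks up the phone.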
Great discussion, and in particular Greg hits the nail on the head. Another way of looking at Greg’s point is to separate things into whether the standard speaks about the world of computers or the world of humans. Computers are simple, and it should be possible to make simple standards for their world. Humans are complicated, and trying to say anything meaningful about their world is inherently difficult.
The XML standard is a good case in point: it’s simple until it has to deal with the issue of the character sets for different human languages. Similarly with dates: as soon as a system tries to deal with them, everything becomes insanely complicated. It’s much easier to measure time in seconds since the computer was powered up than to measure it since some human “year 1”. (Fortunately, good libraries hide the date issues from most of us, most of the time.)
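The seconds-versus-calendars contrast above can be made concrete in a few lines; a Python sketch (illustrative only, using a fixed offset as a stand-in for a real time zone):

```python
import time
from datetime import datetime, timezone, timedelta

# The machine's view of "now": a single monotonically increasing number.
t = time.time()  # seconds since the Unix epoch

# The human view of the same instant immediately drags in calendars,
# time zones, and offsets.
utc = datetime.fromtimestamp(t, tz=timezone.utc)
jst = utc.astimezone(timezone(timedelta(hours=9)))  # fixed-offset stand-in for Tokyo

# One instant, two human renderings; the library hides the mess, as noted above.
assert utc == jst
print(utc.isoformat(), "|", jst.isoformat())
```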
I think Adam’s “lessons learned” are valid for standards that deal with the computer world, but not all of them necessarily hold up when writing (computer) standards for the human world. I’d guess lessons 1 & 2 are the most affected.
Lesson 1 (simple) is not a realistic aim for health: no workable health standard could be simple. A more appropriate target is in 3 (focus), as indeed shown by the comment about health under 1.
Lesson 2 (concrete syntax easy to read & understand by developers) is probably impossible in most cases: the subject matter is simply not the kind of thing developers can understand, unless they also happen to be medical doctors in the particular area they’re looking at.
An interesting extra question is to what extent should the standard allow the same thing to be represented in different ways. Is it always possible to take input in a variety of forms, e.g. different names of the same drug, and yet store all of them as a single canonical form? For the computer world that would seem always to be the case; for the human world, not necessarily.
So, to be honest, I don’t really believe this. I actually have built both concrete banking standards (way back when I worked for Citicorp) and concrete health interoperability standards when I started/ran Google Health, as well as other concrete grammar standards on things like Google Calendar. Every group is convinced that their problem is uniquely hard/complex. In each case, a fairly simple model ended up working pretty well for the job in question. It didn’t solve all problems. No tools do, and ones that try tend to be poor at solving any of them and, as I said in my post, very hard to make into a standard.
Completely agree that healthcare standards are too complex. People working in health informatics have misjudged the complexity of medicine and so over-complicated health IT standards. It would be an interesting sociological study to work out why this is; what is clear though is that the effect has been unfortunate. I recommend Barry Smith’s critique of HL7 for anyone interested in the mess created by over-complex standard making.
Good point to Barry Smith – I found
My experience with working with HL7 was in persuading them to adopt XML in the first place and move away from SGML!
The stakeholders in HL7 frankly revel in complexity. These guys hire folks with 2 PhDs to be on the standards groups – and they love adding more and more to cover off their sponsors’ needs.
The very notion of making it simple is in conflict with their sponsors’ goals – sponsors who are quite happy making something cost-prohibitive to all but the largest corporations and protecting market share.
We see this over and again in the standards process – and yet EDI history tells us something else – that 90% of the messages exchanged use 10% of the components.
Would coming up with core simple templates for common typical tasks that can be broadly implemented be too much of a radical approach?
Great article. For about a year I’ve been trying to say the same thing, but without the precision of this article. HTTP, TCP, IP, TELNET, FTP, SMTP, etc. are all great because they are simple, easy, and they tend to do just one thing.
While it’s very sensible to make sure standards are simple and directed as you say, the requirements of recording health information are themselves already rather complex. So a standard way to represent and exchange health information that *works* is bound to be larger and more complex than the average software person wants to deal with. I really recommend you take another look at openEHR to see how this handles the requirement in a manageable way and involves the domain experts (clinicians) right from the start.
Having implemented HL7 and ASTM (for the lab equipment) interfaces for a laboratory-based information system, and having moved from that standard to the Real Estate Transaction Standard (RETS), I’d have to agree with your last post.
HL7 goes to the minuscule (with no transport), and RETS is STRICTLY web based XML to describe a database you wish to expose. RETS ONLY describes the data… not when it should update, etc…
The article is dead on. Shoot at the big targets. All of the little ones end up worked out by the folks doing the implementation anyway.
Aim small, hit small. Perfect example.. grep.
Allow me to inject a small note of cynicism here. The simplicity you advocate is generally achieved by punting on a lot of difficult “higher level” problems (aka, kicking them down the road). Unfortunately—and I think this is particularly true in medicine—solutions to the higher level problems are what is often needed to convince some crucial stakeholders (read physicians) that the overall effort is worthwhile. (See Donald Kamens’ remark on unfunded mandates and “you get what you pay for”.)
There is a reason why the marketing people at Allscripts unabashedly declare on their EHR home page:
“At Allscripts, our EHR solutions deliver proven success, because we put physicians first. The way we see it, if doctors don’t use it, nothing else matters.”
(And believe me, that statement grates on me too.)
PS: It should be said that the general software development community is much better acquainted with lines of business other than health care. They’ve been working hard on other classes of problems for a longer period of time, and other fields are farther along in integrating IT into their understanding of their disciplines. That word is important; clinically-oriented computer applications are still incidental, at best, to the training of health care practitioners, and that tends to make health care IT incidental to the way they practice and the way they view themselves professionally.
With particular stimulus from Steven Bedrick’s first comment, and my own work in this area, I must say that “democratizing” the logic and technology of semantic interoperability in health care is indeed a tall order, and it almost certainly requires advanced tooling built by highly skilled and knowledgeable people. (Think of designers of microprocessors, programming languages, compilers, IDEs, and visual design tools.) The reason that millions of people can effectively program modern computers and use markup languages is because of the tools they have at their disposal. They benefit from the ready availability of good and quick feedback on whether or not they are producing code whose output looks right, that actually works, and can be ultimately trusted in serious use.*
As the late, great theoretical physicist John Wheeler famously said: “The whole problem is to make the mistakes [and learn from them] as fast as possible.”
* This is not unrelated to Joel Spolsky’s critical views on current computer science education.
First paragraph: Warning. This is a rare nerdy technical post more for [engineers?]. It is about Healthcare XML standards.
In any case, it looks like a word was omitted.
[The post is great, and the constructively contentious discussion in the comments (oops – ☺) further illuminates the issues.]
I hope no one is losing sight of what the goals are. Further, as Mr. Bosworth has stated so well, there has to be a fundamental set of rules and a solid foundation. But let’s not kid the medical professionals here either.
Electronic patient records exist today with partial, sometimes complete, and sometimes no medical history data at all. There are three components that critics, advocates and special interest groups are going to be focused on:
- Cost of data entry & source costs (system)
- Usefulness as part of a doctor’s practice
Everyone can certainly achieve a specific set of goals for data collection and future use. It will be capable of improving patient care and potentially making it worse. If the medical community works within the guidelines Mr. Bosworth suggests, then the community’s syllabus, methods and techniques for using the data will have significant impacts on patient care.
Two things will definitely collide as you folks move forward, and everyone will find it undesirable: doctors who think engineers and programmers don’t listen, and engineers who think doctors have too many different opinions.
I agree that the probability of being derailed is significant. Open source code is a part of the neutrality that will help reduce such risks and increase momentum.
Great post! I am the CTO of the Open Geospatial Consortium, a standards organization for the geospatial community. We have already shared your blog posting with the OGC Membership. Extremely relevant to work in the OGC. But, as Thelonious Monk says, “Simple ain’t easy”!
You might be interested in finding that Project hData released its initial specification set (which is still very much in its infancy, but progressing) at http://projecthdata.org/documents.html Overall, we are addressing quite a few of your comments and requests, and I would be very interested in getting your feedback on our overall direction.
We have also started to work on a patient-centric “federation” of medical records, which is outlined in our recent presentation at the recent NIST IT Security Automation Conference.
The MITRE Corporation
Happy to look it over. I used to work closely with one of your alum, a smart guy called John Schneider. Those were XML days.
Interesting – I recently met John at a conference – we were talking a lot about efficient XML.
Forgot to add that the group might be interested in the following standards-based health application:
Towards Web-based Representation and Processing of Health Information: Methods
Agree with most of these. #7 – Hysteresis can be difficult with healthcare data though. Ignoring new data isn’t always safe. Imagine a specification that captures patient symptoms. Then imagine a future version of the specification that adds a boolean flag that allows the specification to say “does not have”. E.g. “Does not have chest pain”. Obviously for someone using the old version, ignoring the boolean flag would be dangerous. This can be managed by just saying “Don’t add things like that”, but it does mean that more caution is needed.
But if you don’t, then you simply get versioning. After all, it is code reading this spec. The code can’t understand what it doesn’t understand, and versioning destroys interoperability.
Well, if it is only syntactic versioning, sure. But we have to use mechanisms and QA criteria to do safe semantic versioning.
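The hazard described in this exchange can be sketched in a few lines; the field names below are hypothetical, not taken from any HL7 specification:

```python
# A later version of the spec adds a negation flag.
msg_v2 = {"symptom": "chest pain", "absent": True}

def read_v1(msg):
    # An old reader "robustly" ignores fields it does not understand...
    return "patient has " + msg["symptom"]

def read_v2(msg):
    # ...while a new reader sees the negation.
    verb = "does not have" if msg.get("absent") else "has"
    return "patient %s %s" % (verb, msg["symptom"])

print(read_v1(msg_v2))  # → patient has chest pain  (dangerously wrong)
print(read_v2(msg_v2))  # → patient does not have chest pain
```

The old reader produces a fluent, plausible, and clinically inverted reading, which is exactly why "ignore what you don't understand" needs extra caution in this domain.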
I can relate to the arguments. Like a lot of tangible things, if users are expected to crack their head to understand/imagine without the authors helping with examples (“usability: FAIL”), the idea will fail no matter how brilliant it may be.
I’m getting a lot of these openEHR posts. I’ll take a look to see as soon as the next 2 hectic weeks are over.
One additional piece of information is needed.
openEHR represents a way of thinking, a new paradigm, that finds its basis in a CEN/ISO standard for the EHR (EN 13606).
Part one is the stable ‘simple’ part that can be implemented via the openEHR specification at this moment; the other parts allow healthcare to define its ever-changing information needs – needs that can be implemented immediately without (re-)programming.
EN 13606 is mapped carefully to an ISO document (ISO 18308) that contains requirements for an EHR architecture.
The complete standard is only a few megabytes of text.
Former chairman of CEN/tc251 Wg1
Hmm. A few “megabytes” of text? That’s a lot of text to read. In my opinion, the basics of a standard should normally be describable (not counting encoding details) in a few tens of pages. Ideally less. At 2000 characters a page, that’s a fraction of a megabyte.
Adam, agreed, megabytes is too open-ended. That’s why we’ve strived with the CAM template approach to make the definition of the information exchange concise. Critically, a CONTEXT mechanism is vital in ensuring accuracy and concise definitions. The problem with XSD schema is that it describes all the possible permutations that may ever occur, while what you really need to know, concisely, is what the specific exchange should look like for your context and use: the WYSIWYG XML approach, combined with content-control rules that can be linked contextually. CAM allows you to ingest a schema and then generate the particular template that you need.
Since the 13606-1 reference model consists of 42 pages, a few megabytes must be a typo. What probably was meant is a few megabits.
1) “Keep the standard as simple and stupid as possible. The odds of failure are at least the square of the degrees of complexity of the standard.”
Indeed…… slap me with a piece of clue-by-four.
Having said that….. don’t try and move smarts into the app layer. Instead, strip down what is actually needed. All design classics are classics in part because of their stripped-back-to-basics simplicity, e.g. the T-34 tank.
3 Einstein quotes:
“Everything should be made as simple as possible, but not simpler”
“If you can’t explain it simply, you don’t understand it well enough”
“Three Rules of Work: Out of clutter find simplicity; From discord find harmony; In the middle of difficulty lies opportunity.”
I would add to that a quote which exemplifies why “Domain Experts” should not be allowed to get into the “document design” business…
“We can’t solve problems by using the same kind of thinking we used when we created them.”
There needs to be a firewall between the domain experts who can issue requirements (which can be shot down/sent back for review by design experts) & design experts who actually design the XML.
The complexity is usually caused by domain experts continually adding “stuff”, which ends in wheel re-invention (e.g. I have seen a number of “(x)html re-inventions for rich text display”) and other layers of mud added to what can become a big ball of mud.
i.e. Domain Expert != Design Expert.
Domain experts tend towards complexity.
Design experts tend towards simplicity.
“I want to describe everything in the world”
“I may have to build something from this”
Guess which group tends to dominate most Health stds.
2) “The data being exchanged should be human readable and easy to understand.”
Slight teeth sucking on this one. At the end of the day it should be machine understandable & unambiguous.
Reading a few kilobyte message by eye is reasonable, doing so with a multi-megabyte one is less so.
3) “Standards work best when they are focused.”
Indeed. Were I to go into details I would start a firestorm so I will leave it at that.
4) “Standards should have precise encodings.”
Yup. Interesting comment :
“The government could play a role here by requiring NPI’s for all doctor related activities, SNOMED CT for all conditions, LOINC for all labs, and some encoding for all medicines (be it NDC, rxNorm, or FDB) and guaranteeing that use of these encodings is free for all use.”
………i.e. the terminologies etc should be “free for all use”….. Given I am working on terminologies at present (Read (V3,2 & 4byte), SNOMED, OPCS, ICD10, HL7 CTS)…you go gurl !
5) “Always have real implementations that are actually being used as part of design of any standard.”
ROFLMAO………..oh dear me….or you could put it as I do “Any std w/o a working implementation is a meta-standard”. or in a less jargon filled manner “w/o an implementation, it’s not a std, it’s a description of a std.”
6) “Put in hysteresis for the unexpected.” – Survivable s/w. This really is a system level thing & not a message level thing.
7) “Make the spec itself free, public on the web, and include lots of simple examples on the web site.”
Quite. But one can only include simple examples if (1).
[…] Talking to DC « Adam Bosworth’s Weblog “Let’s be honest, a lot of standards are written for purposes other than promoting interoperability. Some exist to protect legacy advantages or to create an opportunity to profit from proprietary intellectual property. Others seem to take on a life of their own and seem to exist solely to justify the continued existence of the standards body itself or to create an opportunity for the authors to collect on juicy consultant fees explaining how the standard is meant to work to the poor saps who have to implement it. I think we can agree that, whatever they are, those are usually not good standards. Health data interoperability is far too important an issue to let fall victim to such an approach.” (tags: healthcare politics software technology standards engineering html) […]
I believe your 7 key points here are enshrined in our OASIS CAM (Content Assembly Mechanism) standard and open source implementation work for simple interoperable exchanges.
Applying CAM templates to the government NIEM.gov approach has enabled us to create “ODBC for NIEM” implementers and shave typical development cycles from 800 hours down to 80 hours with dramatically simpler and consistent results. HL7 could definitely also benefit.
The key is a simple dictionary based approach to component reuse. The dictionaries are tough to reverse engineer from the existing XSD schema tar balls – but once available they transform what implementation engineers are able to do in constructing exchanges. We are also able to scan for potential show stopper issues latent in XSD schema and provide reporting of these.
For more on CAM – see camprocessor project on Sourceforge.net and our wiki and standard sites at oasis-open.org
Nicely written. I’m going to share this with my colleagues working on HL7. Item 5 is VERY important. Nothing can make a (potential) standard stagnate like a lack of viable implementations. Being focused is also really important. I loved the phrase “false precision”. It’s so easy to add a lot of detail to standards because it seems to make them more precise when, in fact, it just makes them more complicated.
Well, you had me right up until point 6, when you brought up the “robustness principle,” which is quite possibly the worst thing to ever happen to the World Wide Web.
We’ve known how to make a computer read code ever since the 1950s: you run it through a parser with strict grammatical rules, and if the code breaks the rules, abort with an error message. Do not pass go, do not collect $200, and for the love of all that is binary do not let some computer program attempt to read the coder’s mind and figure out what he meant to write!
Not following this tried-and-true principle for HTML is the reason we don’t have an HTML standard today. There’s an “official” standard that nobody complies with, then there are the “the way IE does it standard” (a different one for each IE version!), the “the way Firefox does it standard,” the “the way Safari does it standard” and a handful of others. And multiple standards is the same as no standard at all.
If we had made all web pages either parse correctly or not display anything, instead of trying to “be liberal in what you will accept,” maybe today we’d be able to write webpages that look the same in every browser without memorizing small novels’ worth of ugly hacks.
So I actually built a browser: IE 4. And I can tell you, from direct personal experience with hundreds of HTML authors, that that way we also wouldn’t have had the web. In XML we did this with namespaces, maybe more elegantly, maybe less. It is easy to criticize messy in favor of clean, but people are messy. Require syntactic precision from them and they just don’t use you. And unused standards tend to fail.
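The two philosophies in this exchange are easy to demonstrate side by side; a Python sketch contrasting a draconian XML parse with a forgiving, browser-style recovery (the markup and collector class are invented for illustration):

```python
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

# The kind of markup real authors actually write: unclosed <li> tags.
sloppy = "<ul><li>aspirin<li>atenolol</ul>"

# The draconian route: abort with an error message, display nothing.
try:
    ET.fromstring(sloppy)
except ET.ParseError as err:
    print("rejected:", err)

# The forgiving route: recover what you can, like a browser does.
class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.items = []
    def handle_data(self, data):
        if data.strip():
            self.items.append(data.strip())

collector = TextCollector()
collector.feed(sloppy)
print("recovered:", collector.items)  # → recovered: ['aspirin', 'atenolol']
```

Both behaviors are reasonable engineering choices; the disagreement above is about which one earns adoption from messy humans.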
From Bryan Cantrill of Sun Microsystems, on designing water balloon launchers [ 🙂 ]:
(ACM Queue actually turned out better than that.)
Here is another perspective on your original post (which I agree with).
The UNIX philosophy is “provide simple tools that perform simple tasks and can be combined in powerful ways”. This same philosophy also applies to standards.
A positive example is the MIME standard: it solves one problem and solves it well. Once you’ve done that, you have a standard (by the way, a “standard” can also be called a “tool”) that can be combined with other standards in powerful ways. In the case of MIME, those other standards would include SMTP, HTTP, etc., and the resulting applications would include email attachments, HTTP file uploads, etc.
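MIME’s combinability is visible even in a few lines of stdlib Python; the subject line and payload below are invented for illustration:

```python
from email.message import EmailMessage

# MIME solves one problem: wrap typed parts into one typed container.
msg = EmailMessage()
msg["Subject"] = "Lab results"
msg.set_content("Potassium: 4.5 mmol/L")  # a text/plain part
msg.add_attachment(b"fake-pdf-bytes", maintype="application",
                   subtype="pdf", filename="results.pdf")

# The very same bytes can now ride over SMTP, sit in an mbox file,
# or be POSTed over HTTP -- the combinations come for free.
wire = msg.as_bytes()
print(msg.get_content_type())  # → multipart/mixed
```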
Of course, there are plenty of bad standards that try to do everything, and these are the bloated, overly-complex, and unmaintainable failures.
Anyone who starts by talking about designing a “standard for health care” is already going down the wrong path, just as if they were trying to design a “tool that does everything”. Instead, the suite of standards we use for health care should be built from the bottom up, by humbly solving one small, discrete, atomic problem at a time (and reusing existing tools where appropriate) until one day we discover we’re actually able to do something useful.
How small “small” can be is limited by the complexity of the problem domain. But it should be a small as possible – even if it seems smaller than necessary at first glance (I’m sure early UNIX skeptics used to laugh about the usefulness of “cat”).
Looking at the existing health care standards that people are attempting to use as the “atomic” building blocks, some are better than others. Some should probably be replaced by one or more simpler standards, because they are not as simple as possible – though this is unpopular and takes courage.
Adam, the critical win factor with any HTML browser is that it is totally forgiving – it will take whatever slop you pass it and at least do something. May I suggest that that is not a good model for healthcare? However – lesson learned that you do want to make a best effort to parse XML before giving up – and certainly the CAM approach is that: it is not “brittle” the way that XSD schema is, or Java tooling built off that (again, I too have copious firsthand experience making major government XML processing systems work). Nor does CAM encourage bad practice – such as making all your exchange elements optional – so that no one knows what is really required or not!
BTW – namespaces add to the complexity factor x10 – sigh. Unfortunately, namespaces could have been done simply – but they were not – with all kinds of goofy side effects for the unwary.
Again – with the CAM approach we try and ameliorate this for you. Bottom line is that the WYSIWYG XML exchange structure approach makes a lot of sense – and I think you can see parallels to the HTML world – because there you never did achieve a way to test rendering – or even agree on what that might be – but people could eyeball that web page and say “Yes – that’s what I want”.
Having it possible for business users to validate the exchange information easily is critical – and I don’t know too many business users who can read XSD schema and have a clue what it means!
Congratulations, your blog article is attracting some attention. So I feel the need to chime in. But don’t want to just repeat things other commenters have said. So I hope these references will work:
I must say that having been in the standards world for 15 years or so, I find most of Mr. Bosworth’s commentary rather predictable and it is not laden with practical insights on what to do differently.
Reminding us of Postel’s principle is worth it though, and there is one insight Mr. Bosworth relates in passing to another reply above, which I completely agree with: https://adambosworth.net/2009/10/29/talking-to-dc/#comment-5176
It is so true: versioning is the enemy of interoperability. And in the community of health care interoperability standards unfortunately there are so many voices calling for versioning and I cringe because it’s just making everything hard.
Well, your post has been read by all and sundry inside HL7. Opinions vary wildly concerning what should be made of it 😉
> If all I, as an engineer, want is to put together
> a list of medicines about a patient and send that
> to someone who needs it, then that’s all I should
> have to do.
really? well, that used to be where HL7 was, mainly, I think, but the healthcare eco-system has been migrating towards large co-ordinated programs, which generally are antithetical and even hostile to that statement. I feel that HL7 is trapped between these paradigms, unable to deliver something completely satisfactory to anyone.
btw, above you said you had emailed me – but I can’t find a record of it
I have re-emailed you by the way.
I am of course an HL7 v3 guy, having joined it 12 years ago and have dealt with the whole standards process, its politics, technologies and deep ideas. I like to make things work in smart ways and HL7 has helped me to do that. I know that we are often perceived as exceptionally difficult. Sure enough HL7 opponents will flock to this article and leave their URL references in passing.
I am stimulated by the implied criticism in this article, and I take it as a motivation to stir in my group to bring about more lightness around what I believe are simple and still tremendously powerful and implementable concepts. I feel that often in HL7 the good stuff is buried under the natural outgrowths of big standards organizations: we have to be inclusive, and most ideas make it in.
But I find it a little disingenuous of these standards musings to always pull Berners-Lee out of the hat, without careful reflection on what the analogy really can mean:
Separating transmission protocol from content is certainly nothing new or unique or insightful by any means. HL7 did that in around 1989 at the latest. And P-L-E-E-A-S-E do not bother me with the old and lame “envelope and letter” analogy. The content-envelope “pattern” is one of these engineering all-stars that make for a quick score to get some heads nodding. But it is a completely empty engineering principle that is guilty of having created so much redundancy and extra work for engineers having to implement it while returning zero actual value. One of its more recent outgrowths is SOAP, the piling up of useless XML elements wrapping actual unspecified content in ways barely surpassed by HL7 message and control act wrappers.
The thing is, HTML had to answer to very limited requirements: encode text with a few features. But look at the simplicity of those requirements, and consider that the final standard for this today involves HTML, SGML, XML, XML Namespaces, XHTML, CSS, and more.
I am quite proficient in using these technologies and found some real gemstones in this (XSLT). But it strikes me that those people who never seem to get HL7 and the RIM (after having had it explained to them) are the same ones who don’t really get that full truss of technology behind HTML either, or relational databases for that matter.
So, what insight from the Berners-Lee reference can translate into what we do in healthcare standards? It doesn’t help that Mr. Bosworth offers his opinion that they got it right with CCR (in implicit opposition to HL7?). But let’s set that aside.
Maybe what one can actually learn from it is that we in HL7 need to give people the “Christmas-day quick-start experience”: allow people to rip open the box, plug the device in, turn it on and make a “beep” or hit the “demo key” before reading the 1000-page manual. Allow a stupid dabbling entry into the technology. Allow people to build a “Hello World” example that shows enough of the utility without burdening their early start. We could really deeply improve our standard if we allowed simple things to be simple and grew complexity with need. Did I mention that I don’t think XML Schemas and the HL7 message and control act wrappers accomplish this? (And oh are they like envelope-and-content, that’s why I dislike them because I only care for content pure!)
How might this look with HL7 v3? It could look like this: “Tutorial to send my first lab result in 15 minutes.” Create a file “lab-test.xml” with the following contents:
Now you click on the lab-test.xml file and, after fighting with the not-so-simple and still stupid browser security settings that make browsers ignore or refuse to process a simple XSLT stylesheet (goodness knows why), you may end up seeing something and go “yeah, I made my first beep. Let’s go have cake!”.
I have always written my standard documents to do a little bit of teaching, showing people in examples and little snippets like the above how to become fluent in the language. No need to beat people over the head with the abstract data model, etc. Not right away. We at HL7 can be content that we have a flexible semantic model unlike any of our competitors. We can pull feature after feature out of the hat to answer even the craziest requirement. But we need to regain the flexibility whereby people can start simple and do not have to be conscious of the advanced capabilities during their Christmas-holiday tinkering.
Replying to my own post: there was supposed to be an xml example in here, but it got gobbled up by the blog. Let me try a test:
If this comes through as XML I post the example, if not, sorry.
Excellent, so, the “My first lab results in 15 min” example is this:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<?xml-stylesheet href="hl7v3-simple.xsl" type="text/xsl"?>
      <id extension="08/15-4711" root="22.214.171.124.4.1.32366.15"/>
  <code code="2823-3" codeSystem="2.16.840.1.113883.6.1"
        displayName="Potassium Substance Concentration in Plasma"/>
  <value value="4.5" unit="mmol/L"/>
Gunther, is the original XSD Schema for your transaction available some place for download?
I’d like to try to create a quick CAM template that matches what you have here – to evaluate how simple, simple really is ;-)
In particular, by inspecting the XSD schema one can see how well it is set up to be easily interoperable across systems and exchanges.
Thanks. This is really helpful. To be honest, HL7 should be full of examples like this. Is this actually a valid Hl7 document?
And I have had to build actual health care interoperability standards and use them, both when running Google Health and now running Keas, which inter-operates with a variety of partners including Quest Diagnostics, MinuteClinic, and more. I remain unconvinced that the complexity or data model of HL7 is truly required when all that is needed is to share a list of medicines and test results. But I’m happy to listen and learn from examples.
Excellent post. An example of a bad standard would be one using excessive mathematical formulas. “Average Joe” developers simply don’t have time to decipher them and convert them to a real-world implementation.
The general theme is to describe the bare minimum necessary to produce a useful and consistent output, rather than specifying every possible implementation detail.
[…] is an excellent post – Talking to DC – by Adam Bosworth, highlighting his testimony to the HIT Standards Committee. Adam has been […]
Adam gets it.
Re: point 7 – make the spec public. Please tell the Green Coffe XML people.
Very good article. Very well written.