Webinar Video with Transcript: Getting SMART about C-CDA

Enjoy this transcript of Josh Mandel’s popular webinar about SMART’s open-source tools.



Slides are also available, as well as our open-source tools.

This talk is going to be amazing, because it’s going to cover how to generate high quality Consolidated CDAs. The Consolidated CDA is part of Meaningful Use 2. And Josh and team have put together a really neat set of tools that your company or organization can use, whether you’re creating these Consolidated CDAs or importing them and taking them into your system.

So with that, I’m actually going to hand it over to Josh. But one point of order to make is if you have any questions along the way, feel free to toss them into chat. I’ll be compiling them as the presentation goes on. And also Josh can see them, too. So if anything comes up while the presentation’s happening, he can address those. But in the meantime, Josh, do you want to kick it off and take it away?

JOSH MANDEL: Yeah, thanks so much, Ryan. Thank you everyone for joining. I’m really excited to get to share with you today some of the work that the SMART team has been doing with Consolidated CDA. We’ve got a really great crowd, and a diverse crowd, which is wonderful. So we’ve got folks from the electronic health record vendor side, we’ve got folks who are building health apps, who are software developers. We’ve got folks from provider organizations, from payers. So it’s a really nice mix.

I want to tell you just a tiny bit of background about SMART before we delve into the Consolidated CDA work. So SMART is a project at Harvard Medical School. We’re funded directly by ONC. And our high level goal is to build ways to hook third party applications into health record systems– personal health records, electronic medical records.

But today we’re going to be really focused on Consolidated CDA. And ways that you can implement Consolidated CDA for Meaningful Use Stage 2. So, quick outline of what we’ll do. Because it is a diverse group we’re going to spend just under 10 minutes on background. Talk about what Consolidated CDA is, where it fits into Meaningful Use Stage 2.

And then I’m going to talk about two really specific pain points around consuming Consolidated CDA and around generating Consolidated CDA. And for each of those I’ll show you some tools that SMART has built. Open source tools that are out there for the community that you can take advantage of. And then I’ll talk about one opportunity that emerges once you’ve implemented Consolidated CDA.

So all along the way I’m going to show you demonstrations of these tools to make them very concrete. And I hope to communicate to you the fact that we’ve been building stuff. And we’re excited about it. And we want your feedback. We want you to take a look and try it out. All the stuff is available, open source, and you can tinker on it. You can use it. We’ve got hosted demos of everything.

And then at the end I’ll give you a list of the resources that we’ve built. And give you a single to-do item, which, spoiler alert– is to fill out a survey after this webinar to give us your feedback. And we’ll end with time for open Q and A. So at least 10 minutes at the end of the hour.

So let’s jump right into it. Meaningful Use and Consolidated CDA. So, big picture– Consolidated CDA is a standard from HL7. So it’s an international clinical standard around writing down a patient’s summary health data. And Meaningful Use Stage 2 invokes the Consolidated CDA. There’s a number of places within Meaningful Use Stage 2 where you’re required to either produce or consume a Consolidated CDA for some reason.

We’ll talk about those places in two different perspectives. So this first perspective here is the perspective of a certified EHR system. In the middle of the screen you’ll see a certified EHR. And in this perspective we’re thinking about inputs and outputs. We’re thinking about times when documents go in and times when documents go out.

So in Meaningful Use Stage 2 there’s a couple places where your EHR needs to be able to take in a Consolidated CDA document. These are things like transition of care, when you’re receiving a referral from another provider or hospital. And then there are a number of places where your EHR needs to be able to spit out or export a Consolidated CDA. And these include again, transitions of care, if you’re referring a patient out.

If you want to share data with the patient, to allow them to view or download their data from the web, or to transmit it electronically to a third party, those are all places where Consolidated CDA is used. And then a requirement that we’ll talk about in a little more detail for data portability and exchange.

So that’s the sort of inputs and outputs perspective of Consolidated CDA and Meaningful Use Stage 2. And I want to show you one alternate perspective which is certification and attestation. So for the folks on the line who live and breathe this stuff, apologies. But a really brief introduction to processes that are part of Meaningful Use.

So on the first column here I have certification. And this is the process by which an electronic health record technology becomes certified. And that says that that technology has certain features or capabilities that are required for Meaningful Use Stage 2. So the first column is all about the features or capabilities of electronic health record software.

The second column is attestation. This is about the way that people– providers and hospitals– use that software. And this says that the providers who have the certified software are doing certain things with it. So there’s necessarily a time axis here, which I’ve drawn along the x-axis. Because certification happens first. The product has to be certified before it goes out to customers who then use it and then attest to their use of it. And that time aspect is going to be important later in the talk.

So what are some places where Consolidated CDA is used here? Well, I already mentioned patient access to data. So each certified EHR, as part of the certification process, needs to show that patients can view, and download, and transmit their data to a third party using the software. So that’s certification.

And on the attestation side, it turns out somebody actually has to use that capability. So providers need to give patients access to test results and other health data within four days of the data being available to providers. That’s electronic access online half the time. So 50% of patients need to have that access within four days.

And then on the patient side, actually, patients are required to use it. So 5% of patients have to actually go and view, or download, or transmit those data in order for this to be called meaningful use on the attestation side.

Clinical summaries. So when a patient has an encounter, an office visit or a hospitalization, on the certification side every EHR needs to provide some kind of authoring environment. A way for a provider to write effectively a summary, a clinical note, that says here’s what happened. And here are the pertinent details.

So the certification criterion is that the EHRs need to support writing those notes, exporting them as C-CDAs. And on the attestation side, providers need to quote-unquote, “have provided,” either electronically or on paper, a clinical summary to half of the patients within a day of that encounter. So if you think about it, the timing there really suggests there’s going to be a lot of electronic use. And the standard for use there is going to be Consolidated CDA. So that’s clinical summaries.

Then transitions of care. When you’re referring a patient from one hospital setting to another, one clinic to another, the software needs to have certain capabilities. To receive one of these Consolidated CDAs as an incoming message, to display it on the screen. And then people actually have to use it. So on the attestation side, 50% of the time when you transfer a patient out you have to provide some kind of summary or transition of care record. 10% of the time you have to do it electronically.

And then this is a really interesting point here on the attestation side. You need to demonstrate– not every day, but at least one time– you need to demonstrate that you can export a document from your system, send it electronically to somebody else outside your organization who has a different vendor system. And that they can successfully receive that document. So that’s a one time successful demonstration that you need to show as well for attestation. So that’s transitions of care.

And then there are requirements around reconciliation. So when you receive a Consolidated CDA as part of a transition of care, every certified product needs to allow a provider to incorporate certain data from that Consolidated CDA right into the EHR itself. So it’s not enough just to store a big long document. You have to pull out certain kinds of individual facts from that document and reconcile them with the data that are already in a patient’s record. And those include medications, problems, and medication allergies.

So that’s the certification requirement. On the attestation side, you need to use the medication reconciliation component for half of incoming transitions– at least half. So certification says meds, problems and allergies. Attestation says you at least need to do meds.

And then finally there’s a certification criterion for what’s called data portability. And I call this a bulk export or a batch export. Which is a requirement that products must be able to export one Consolidated CDA per patient. Which is kind of a full clinical summary of that patient’s record. With the idea that this would help prevent vendor lock in. You can bulk export your data. So every product needs to be able to do that.

There is no accompanying attestation requirement that I’ve found along with that. So users don’t actually have to make use of that bulk export, but it needs to be there for them if they want it.

So that’s five different use cases with some implications for certification, some implications for attestation. And where is the SMART project coming in on this? What’s our focus? So I want to make this really clear. Our focus is on the core common data across all these use cases.

We haven’t been extremely concerned about the difference between a transition of care export and a data portability export. We’re focused on the data in those documents, the structured clinical data– medication lists, problem lists, patient demographics, allergies, labs, vital signs. We’re really focused on making sure that when those kinds of data are incorporated into a Consolidated CDA for any of these use cases– we want to make sure that those data are correctly structured, without ambiguity, with codes from the right coding systems. In short we want to make sure that those data are interoperable.

At a high level, before I delve into the details, we want to encourage folks to implement Consolidated CDAs with kind of an ecosystem in mind. A Meaningful Use Stage 2 ecosystem where there may be multiple different systems that are generating and consuming Consolidated CDA. And it’s not good enough just to sort of generate one and send it around. The data need to be good if these are going to be clinically relevant documents.

We want to make sure that those documents are clinically relevant. They’re not just somebody checking off a check box and saying, yes, I generated this thing. Even though nobody’s going to use it. This is a real opportunity to have high quality clinical data flowing. And Consolidated CDA is a great way to make it happen.

Let me go now from the high level to the really concrete. And say, what are the difficulties of actually doing that? So we’ll talk about two pain points. And the first one is how do you build? How do you build a certifiable product? And actually test your import pipeline and make sure it’s going to work? And this involves a lot of planning for unknowns.

What do I mean here? We’ve got over 700 vendor products that are certified in Meaningful Use Stage 1. I don’t know if we’ll have more or less in Meaningful Use Stage 2, but there will be hundreds. And when you have hundreds of different vendors implementing to the same standard– you have a situation where hundreds, or maybe 1,000, flowers will be blooming here. And there are going to be certain quirks in the way that Consolidated CDA is implemented from vendor to vendor.

And if your job is to build an import pipeline that can be robust and tolerant to these quirks, and import data from wherever– well, how are you going to test that? And how are you going to have any assurance that it works?

So I’ll give you a couple quick examples. For example, coding systems. There are many places in Consolidated CDA where a coding system is required. There are a lot more where a coding system is sort of recommended, which means it’s optional. Unless you understand something about what different vendors are doing, and how they are coding their data, you’re going to have a hard time importing those data in a consistent way.
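To make that concrete, here is a minimal Python sketch of the first thing an import pipeline has to do with any coded element: figure out which terminology a code claims to come from. The OIDs below are the standard HL7 identifiers for these coding systems; the function itself is purely illustrative.

```python
# Standard HL7 OIDs for the major coding systems referenced in C-CDA.
CODE_SYSTEMS = {
    "2.16.840.1.113883.6.96": "SNOMED CT",
    "2.16.840.1.113883.6.88": "RxNorm",
    "2.16.840.1.113883.6.1": "LOINC",
}

def describe_code(code_element):
    """Report which coding system a parsed <code> element claims to use."""
    oid = code_element.get("codeSystem")
    system = CODE_SYSTEMS.get(oid, "unknown or local code system")
    return f"{code_element.get('code')} ({system})"
```

A document whose medications aren’t coded in RxNorm, or whose codeSystem attributes are missing entirely, lands in the “unknown or local” bucket, and that’s exactly the kind of vendor-to-vendor quirk you need real test data to discover.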

Another really concrete example is where, in a Consolidated CDA document, will you find a patient’s chief complaint, the reason that brought them to an encounter? In the Transitions of Care working group, in the Standards and Interoperability Framework, they actually looked at this question. And they said, you know what? There’s actually seven places in a Consolidated CDA where it would be reasonable to write down a chief complaint. So that means maybe there are seven different places to look.

And if you’re trying to build a robust import pipeline, do you look at all of them? Do you look at the ones that are the most common? What if you find information in more than one place, which one do you trust? It’s hard to build an import pipeline when you don’t know how these documents are going to look. And then the important point here is we want to help vendors who are building these import pipelines expose errors at design time.

So remember when I talked about the difference between certification and attestation, there was a time component. If it takes nine months to a year between when a product gets certified, and when users start to think about attestation? Well that’s a real issue. Because users might not actually hit these real world issues, working with data coming across the wire from disparate vendors, they might not hit them until a year after your product has been certified.

And wouldn’t it be nice if you could start to get at some of these issues earlier, at design and test time. Rather than later when the products are out there in the market. It’d be easier to correct errors early on. There will be some of both. But it would be nice if we could shift the balance earlier.

And then finally, there’s this notion of a connect-a-thon, in IHE, Integrating the Healthcare Enterprise, where vendors who are building products that fulfill a certain specification will physically come together and co-locate for a day or two. And try out their products and make sure they inter-operate. And fix the bugs when they don’t.

And that’s a really powerful paradigm for building systems that interoperate in the real world. And it’s an important sort of backbone of the way that we get interoperability in today’s health care environment. But wouldn’t it be nice if there were more things that were sort of like a connect-a-thon that you could do 24/7, and have a tighter feedback cycle. You could learn faster and not have to wait until just a few times a year in order to try these things out.

That’s the pain point. It’s building a robust import pipeline. What do we have? What’s SMART’s prescription for this pain point? Well, we have something which we think is going to be very useful. Which is a very simple, lightweight repository of sample Consolidated CDAs. These aren’t samples that we’ve written ourselves. These are samples from real vendors. These are actual exports out of certified products, and products that are getting ready for certification today.

Right now we have an open repository on GitHub. And we have cross-vendor examples, including documents from Cerner, from NextGen, from Greenway. And NextGen and Greenway at least are already certified for Meaningful Use Stage 2. So you can start to see what Consolidated CDAs from certified systems will look like. And hopefully we have one on the way from Allscripts as well. And I want to encourage everyone on the phone to get excited about this repository. And think about adding a document to this repository, if you’re beginning to generate documents, as well.

So what is this? It’s an open repository. Actually, it’s immediately a useful public good that you can take and employ today. And you can employ it in a number of ways. One thing you can do is simply to look at the documents and learn from them. Vendors do very interesting things, which aren’t always that well documented in the specifications themselves.

For example, NextGen has a really nice way in their Consolidated CDA of linking together medications and problems. Linking those things back with the encounter during which the medication was prescribed or during which a problem was noted. Most Consolidated CDAs don’t define those links. But if you’re interested in doing that you can look at NextGen and see an example of how to do it. So you can learn from these documents just by reading them.

But then the really exciting thing is you can start to use these documents to test the import pipeline that you’re building. So you can actually begin to build up some automated testing within your own organization using a growing set of real world examples as the inputs to your pipeline. To ensure that your pipelines are robust. So we can think about automated testing and analysis of these documents in sort of a big way.
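As a rough illustration of that kind of automated testing, here is a sketch of a test that feeds every sample document from a local checkout of the repository through an importer. The folder name and the `import_ccda` entry point are hypothetical stand-ins for your own pipeline.

```python
import glob

from my_pipeline import import_ccda  # hypothetical: your importer's entry point

def test_real_world_samples():
    # Assumes the sample C-CDA repository has been cloned into ./sample_ccdas.
    for path in glob.glob("sample_ccdas/**/*.xml", recursive=True):
        with open(path, "rb") as f:
            result = import_ccda(f.read())
        # At a minimum the importer shouldn't crash, and should hand back
        # a patient with demographics, for every vendor's document.
        assert result.patient is not None, path
```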

I want to show you a quick demo of this repository. As I mentioned, it’s really quite lightweight. It’s an open repository. It’s hosted at github.com in the Children’s Hospital Boston section. And we have a repository called sample C-CDAs. Right now we’ve got a folder for each vendor. And that includes Cerner, and NextGen, and includes Greenway as well. And you can navigate through these folders, and just click on one and see examples of a document.

And if you’re interested in what an export summary from Greenway looks like– I’ll show you one that’s formatted here. You can see, here is the actual XML content of this document. And you can download it right here from the web. Or you can check out the git repository, and keep it in sync with the machine inside your local development environment.

The idea is, you can look through the documents from different vendors. We also have examples from HL7 itself. And from the NIST scripts that were written for Meaningful Use Stage 2 certification. And again from the transitions of care work group. Every Consolidated CDA example we can find, where it’s available for redistribution, we’re including in this repository. We strongly encourage you to use it and think about contributing if you’re beginning to build documents.

I want to emphasize the point that your documents and your export pipeline, they don’t need to be done 100% in order to contribute a document. For example, when Cerner was just getting started with their Consolidated CDA implementation, they shared a document. It wasn’t complete. But it had medications and problems and patient demographics. And that was a really great start. Now we’re able to track their progress as they work through the rest of the implementation. So it’s early work, and very exciting too.

That’s a pain point all about importing documents. And the solution there is simply an open repository where you can learn, and make your import pipeline more robust. Pain point number two is about exporting documents. And I put this in quotes here. I say, getting Consolidated CDA exports “right.” And now, right is in quotes. Because what does it mean for export to be right?

One definition of right might just be that it’s formally valid. That when you pass it into the official validation tools it comes out without any errors. And that is one definition of right. But I don’t think it’s the most useful definition of right. It turns out that generating a document is complex. And the formal validators only pay attention to certain kinds of things. They only catch certain kinds of errors.

And I’ll emphasize that point here with some examples. Generating these documents is tough, because you’ve got to do a lot of things. You’ve got to get your coding in this sort of standards-based fashion. So that you’re using, for example, RxNorm codes for medications, and LOINC codes for labs, and SNOMED codes for problems. And because there’s this dense, nested XML structure, with a lot of inter-relations between elements, there are a lot of opportunities for contradictions. I’ll show you some examples of contradictions in a moment. And for ambiguity as well.

There are many places in a Consolidated CDA where you get to make a decision between whether to represent data in a very flexible, sort of human readable way, versus a highly structured more machine friendly way. And Consolidated CDA provides a way to do both. But you often can make a choice here. And that choice means an opportunity for some ambiguity.

Now there are best practices around how to do these things, right? And the set of best practices is growing over time as implementers gain experience. But it can be hard to learn about these best practices. You have to follow blogs, you have to follow mailing lists. You have to sort of be in the know to be aware of them. And so those practices can be opaque.

And the result of all this– of the complexity, of the opportunities for ambiguity, of the fact that best practices can be opaque– the result of all of that is errors go undetected. Each of these words here is a link. And I just wanted to show you a couple of examples of what kinds of errors today go undetected from the standard validation tools.

I’ll start off with a very simple example. This is an entry in a patient’s problem list. And the XML is at the bottom of the screen here. I won’t get into the details except to say this problem here is supposed to be essential hypertension. So high blood pressure without any particular reason for it. And it’s been assigned a SNOMED code here.

Well, it turns out that’s not actually a SNOMED code for essential hypertension. Turns out that’s not actually a SNOMED code at all. So this is a formally valid document. It passes the official validation test. But it has this code in it which actually isn’t a SNOMED code. And it looks like it was probably just a typo. Somebody left off a zero as they were entering this data in. Maybe that was done by hand, maybe there was an error in database records somewhere.

But passing this kind of document around over the wire, and transmitting it from one organization to another, is going to lead to confusion downstream. It’s the kind of mistake that we would like to detect ahead of time. And know that there was this issue in our export.
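A check for this class of error is conceptually simple, as in the minimal sketch below; the hard part is having a terminology source to check against. Here `valid_snomed_ids` is assumed to be loaded from something like a local UMLS installation, and the example codes are illustrative (59621000 is SNOMED CT’s concept for essential hypertension).

```python
def check_snomed_code(code, valid_snomed_ids):
    """Warn if a code that claims to be SNOMED CT isn't a known concept id."""
    if code not in valid_snomed_ids:
        return f"{code} claims to be SNOMED CT but is not a known concept id"
    return None

# A dropped digit turns a real concept into a non-code:
# check_snomed_code("59621000", valid_ids) -> None (essential hypertension)
# check_snomed_code("5962100", valid_ids)  -> warning (not a SNOMED CT id)
```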

Another example of this kind of problem, I mentioned that when there’s these complex nested XML structures, there’s opportunities to write down things that are contradictory. So this is another entry in a problem list, in Consolidated CDA. And here every problem is associated with what’s called a concern. Which represents a clinician’s concern about that problem.

And here we have an example where there’s a disagreement about the status. So this XML snippet here shows a problem whose concern is quote-unquote completed, meaning it’s no longer a concern. And yet the problem has a status observation which says active. So this says there’s an active problem, but the concern has been so-called closed. This is probably not what is meant. It’s probably an error here to say the concern is completed about the patient’s active asthma.

And it would be nice if we could surface that error. And say, hey, is this really what you meant? Or is there something wrong with the document? And again the official validators today don’t pick up that kind of contradiction. And it would be nice to be able to surface that kind of thing.
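Here is a rough sketch of what such a contradiction check could look like, using lxml. It flags any act whose statusCode is completed but which wraps a status observation displaying as Active. A real rubric would match the specific problem concern templates; this version keys off displayName just to show the shape of the check.

```python
from lxml import etree

NS = {"h": "urn:hl7-org:v3"}  # the standard CDA namespace

def find_status_contradictions(ccda_bytes):
    doc = etree.fromstring(ccda_bytes)
    warnings = []
    for act in doc.iterfind(".//h:act", NS):
        status = act.find("h:statusCode", NS)
        if status is None or status.get("code") != "completed":
            continue
        # Look inside the concern act for a nested status saying "Active".
        for value in act.iterfind(".//h:observation/h:value", NS):
            if value.get("displayName", "").lower() == "active":
                warnings.append(
                    "concern act is 'completed' but wraps an 'Active' status"
                )
    return warnings
```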

And then one more simple example, which is units for vital signs. So this is a patient’s body mass index. It’s an observation that the body mass index is 27.9. And the units for body mass index are kilograms per meter squared. In Consolidated CDA the specification says that you’re supposed to use UCUM codes. The Unified Code for Units of Measure. And the correct code for kilograms per meter squared looks like this, kg slash m2. Now in this document, it’s almost that. But instead of a 2, you actually see this superscript 2 character.

That’s really nice for a human readable document. But it’s not nice in the sense that it doesn’t adhere to the UCUM standard. And if a machine parser is trying to treat that field as a UCUM field, it’s going to fail. So again, this kind of error, which you might not even notice as a human being reading the document– machines are going to notice it. And it sure would be nice if we could pick that kind of error up early on in the process. So we didn’t send documents over the wire that have those kinds of errors.
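A unit check along these lines can be as simple as a lookup table keyed by each vital sign’s LOINC code, as in this sketch. The LOINC codes and UCUM units shown are the standard ones; the table is deliberately tiny.

```python
# Expected UCUM units per vital sign, keyed by LOINC code.
EXPECTED_UCUM_UNITS = {
    "39156-5": {"kg/m2"},              # body mass index
    "8302-2": {"cm", "[in_i]"},        # body height
    "3141-9": {"kg", "g", "[lb_av]"},  # body weight, measured
}

def check_vital_unit(loinc_code, unit):
    expected = EXPECTED_UCUM_UNITS.get(loinc_code)
    if expected is not None and unit not in expected:
        return f"unit {unit!r} is not a valid UCUM unit for LOINC {loinc_code}"
    return None

# check_vital_unit("39156-5", "kg/m2")     -> None
# check_vital_unit("39156-5", "kg/m\u00b2") -> warning (superscript two)
# check_vital_unit("8302-2", "in")          -> warning (UCUM wants "[in_i]")
```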

That’s the pain point about getting these exports right. And there’s a lot of ways to have errors. How do we address this? SMART’s prescription for pain point number two is Consolidated CDA Scorecard.

The Scorecard is a web-based application. It’s open source. You can run it through a hosted version that we have online, which I’ll show you. And it promotes best practices in generating these Consolidated CDAs. It doesn’t do everything, but it focuses on a few key areas, including code validity. So it makes sure that when you use RxNorm codes, and LOINC, and SNOMED codes, you’re using real codes that exist. I’ll show you more about that in a moment.

It checks a few structured elements, including medications, problems, vitals, lab results, and smoking statuses. And it looks for these kinds of contradictions that we’re talking about. So the way it’s implemented is, there’s just a set of what we call rubrics, which are individual checks that look at a document and determine whether one single best practice is adhered to or not. And then you get a score, which is kind of a holistic view of how well your document is doing. And it’s really meant not as a hard and fast, cold assessment. But it’s a way to get a sense of how your document is doing, and get some pointers about some areas where you might want to take a closer look.
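In sketch form, that architecture is very small: a rubric is one named check over a parsed document, and the score is just the fraction of rubrics that pass. The names below are illustrative, not the Scorecard’s actual code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rubric:
    name: str
    check: Callable  # takes a parsed document, returns True if it passes

def score_document(doc, rubrics):
    if not rubrics:
        return 0.0, []
    results = [(rubric.name, rubric.check(doc)) for rubric in rubrics]
    passed = sum(1 for _, ok in results if ok)
    return passed / len(results), results  # overall score plus per-rubric detail
```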

Again, this is an open source tool, meaning that anyone can contribute code to it or contribute suggestions for new rubrics. And help promote the quality of these documents. And again, for this, I’d like to show you a demonstration of what the Consolidated CDA Scorecard looks like. This is again a live demo. It’s being hosted at ccda.scorecard.smartplatforms.org.

And the interface here is pretty simple. You paste the document into this text box here, and you click on the Score Me button. And I’m going to start off just by showing you an example. There’s a sample document that’s baked into the Scorecard app. And you can see what it looks like once you click on the Score Me or the Sample button. It says, OK, your C-CDA’s overall score was 50%. And we’ll talk about what that means.

I’m going to point out that there’s a button here that says Share Your Consolidated CDA with SMART Community. And we encourage you, again, even early on in the process, to think about generating one. If you’re generating these documents, think about sharing them. Think about contributing them to the repository that I showed you in the last deck.

But let’s dig in to the Scorecard results. So they’re organized by sections here. And you can see there’s some general results here. We didn’t get any points on this rubric about codes should be valid. SNOMED CT, LOINC and RxNorm codes should be valid. And we lost points here.

Why? Well, it turns out there were three codes which claimed to be SNOMED codes but which didn’t actually exist in the current edition of the Unified Medical Language System. So that’s one kind of error that we can catch.

Let’s go down through the results and see what kinds of other errors there are here. Well, here’s one in the problem list. Some problems that were supposed to be coded with a subset of SNOMED weren’t. So what happened there? Well, two of them we already know about. These were the SNOMED codes which didn’t exist. So those clearly were not correctly coded.

And then here’s a problem in the problem list called coronary artery disease. And there was a SNOMED code for it, but that SNOMED code actually is marked as obsolete in SNOMED CT.

And you can learn more about this by clicking through on a link to a resource called Bioontology. It’s hosted out of Stanford at the National Centers for Biomedical Computing. And they have a hosted copy of SNOMED and you can dig in, and in an interactive way understand what these codes mean and how they link to each other.

So here, for this coronary artery disease code, I can see very clearly the obsolete flag is set to true. Because this is an ambiguous concept. And then there are some suggestions about what kind of other codes you might have meant, instead of this one here which has been marked as ambiguous and obsolete. So that gives you a heads up if you’re generating these documents to think about using the current set of codes that are available.

And then finally, there’s a code here for generalized obesity. It exists in SNOMED. It’s not obsolete. And yet we still flagged it here, because it’s not part of one of the recommended value sets for problem lists. So Consolidated CDA recommends not just that you use SNOMED CT, but they recommend a couple of subsets of SNOMED CT– value sets within SNOMED CT for problem lists. And this particular problem, generalized obesity, doesn’t show up in any of them. It doesn’t mean it’s wrong, but it means it’s something you might want to take a closer look at if you’re evaluating the properties of your export pipeline.

Let me just show you one more thing here, which is vital signs. This document didn’t get any points for vital signs. And the reason for that is quite simple. This document didn’t include any structured vital signs. It had a vital signs section. But it only included a human readable version of those vital signs, and no machine readable version at all. So that’s an example where we want to highlight the fact that even though this document looks really nice when it’s displayed on a screen for a clinician, it doesn’t have the structured data behind the scenes to back it up.

So that’s the example that’s built into the Scorecard. I’ll show you one more example just to give you a sense of how it works. So I’m going to click on– back in our sample C-CDAs repository– I’m going to go look in our NIST samples. And I’m going to include the ambulatory patient record that NIST provided. And I’m going to copy and paste the raw XML of this thing back into the Scorecard.

So forgive me for switching tabs all over here. But I’ve just got the XML for a new document, and I’m going to score it. And I want to show you this document in particular to focus on the vital signs that are recorded in it. And the reason is as we scroll down to the bottom of the report and we can see vital signs, we talked about the way that units are coded for vitals.

And the Scorecard has some very simple checks in place to make sure that when you’re recording vital signs you deal with the appropriate units. So most of the vital signs in this document actually don’t have the right kind of units. Some of them are correct. There were some blood pressures in here that were right.

But here for the weight and for the height, the units that are recorded in this document are nice, friendly, human readable units. But they are not the official UCUM standard units for things like inches and pounds. Instead they’re just a string I-N and L-B-S. Which isn’t quite right.

So again we surface those errors here very clearly, so as you’re developing these documents you know what you should be doing. And you can tell when you’re doing it wrong.

So that’s a quick introduction to the Scorecard. You can take your own documents, and paste them in here, and see how they score. I’ll just emphasize that there shouldn’t be any protected health information shared with this service. And we do log submissions, so we have a log of what’s submitted to it. So that’s the Scorecard.

We encourage you to try it out. We certainly encourage you to point out any errors and bugs, and suggest new features for it. The Scorecard right now has about a dozen rubrics built in. We’ve had suggestions and added rubrics as we’ve gone along the last couple of months. And that’s been really exciting, to see it grow and become more powerful and promote higher quality documents over time.

So that’s the demo for the Consolidated CDA Scorecard. That’s two pain points and two open source solutions to help ease the burden of importing and exporting Consolidated CDAs. Now I want to shift gears a little bit and talk about a novel opportunity that emerges once you’ve started to ease these pain points. Once you’re doing imports and exports of Consolidated CDAs, you’ve invested something in building that infrastructure.

And I want to show you something you can do, a way to leverage that investment. So I’ll just jump into a demo and show you. So SMART platforms, I mentioned at the beginning of the webinar, is all about building third party apps that hook into medical data. And I want to show you a way that we can hook apps in to Consolidated CDA documents, or to the data in Consolidated CDA.

So here’s a demo. It’s hosted at ccdareceiver.smartplatforms.org. And I call it a C-CDA Receiver. I’ll show you why. It looks a lot like the Scorecard. The interface, in fact, is the same. You paste in a document. But instead of saying, give me a score, you can actually run apps on the document.

So what apps do I mean here? Let me just show you an example. The Receiver comes built in with a set of patients with some sample data. And I’m going to just dig in and show you what they look like. So I’m going to load a pediatric patient here, Kimberly Woods, who was born in 1999. And when I choose a patient record, I get a set of apps.

And I’m going to show each of these apps very quickly, to give you an idea of the sort of breadth and scope of the things we’re thinking about. And give you a sense of how easy it can be to extend the functionality of the system by adding these kinds of lightweight web applications that can take advantage of structured data in C-CDA. So I’m going to show you these apps just in the order that they are on the screen here.

So the first one I’ll show you is called Blood Pressure Centiles. And this was an app that we built for clinicians at Boston Children’s Hospital who were looking for better ways to track pediatric patient blood pressures. So for those of you who may not be aware, the way that you diagnose high blood pressure in children isn’t simply by looking at the raw millimeters of mercury numbers the way you would for an adult.

But instead you need to interpret those numbers, taking into account how old the child is, and their gender, and also how tall they are for their age. So you have to know something about where they are on a growth curve to interpret blood pressures. And so you either look those numbers up in some tables, or you plug them into a calculator, and that’s a lot of work if you have to do it ad hoc for every patient.

So we built an app to do it automatically. I’m going to give this app permission to access a patient record. This app automatically can display these blood pressure percentiles, doing the logistic regression calculations, and showing you the values not just as millimeters of mercury here on the y-axis, but also as percentiles inside of these circles here. So I can tell at a glance that this is a patient with normal blood pressure. Even if I didn’t know ahead of time that 111 millimeters of mercury was a normal systolic value for a nine-year-old girl who was 152 centimeters tall.

So this gives you an at-a-glance, integrated view of the data in a clinically relevant, contextual way. And you don’t have to enter numbers into a calculator. All the data are already there in a Consolidated CDA document and this app can take advantage of them. So that’s a quick look at the Blood Pressure Centiles app. That’s available as an open source app today.

I’m going to show you a growth chart, which is a work in progress. That’s just finishing up and it’s going to be available this month, in April, as an open source application that anyone can take and incorporate into a system. And this is a growth chart app that takes into account a lot of current best practices. It was designed in consultation with experts in usability design, with experts in pediatrics, who built in a lot of great features.

And I won’t have time to show you really many of those features at all. Except to say it’s a really nice looking growth chart app, where you can actually compare not just one built in growth chart– the default here is the CDC– but there’s actually disease-specific growth charts. So I can say, what if this is a patient with Down syndrome? How would that change? And I can even say, how would that compare to the CDC recommendations? With the CDC normal curves?

And I can overlay multiple charts, and I can compare a patient’s length, weight, and body mass index in a sort of clinician-friendly view here. And it also comes with a parent view. Where I can see, at a glance, here’s a handout for a patient’s parents. It says, for example, here’s your child who’s 152 centimeters tall. Here’s the child’s BMI, and you know what? The child’s actually overweight.

So here’s how tall they are now. Here’s how tall the parents are. Here’s how tall you might expect the child to be. So you have an at-a-glance view of how your child’s growth is progressing, and whether their current weight is healthy or not, given everything else that we know about their history. So that’s a really quick preview of the SMART Growth chart app. Which again is an open source app that’s available this month.

I’ll give you just another 30 second view of the Medication Reconciliation app. This wasn’t written by the SMART team. This was written by a combination of folks at Maryland and also at University of Texas who are working on the SHARP Area C cognitive project. And they’ve built a demonstration app for medication reconciliation.

So it takes in two lists here. And we’re feeding it one list that comes from one Consolidated CDA and another list that comes from a different Consolidated CDA. And the idea here is it helps you with medication reconciliation. So it’ll automatically notice, by comparing the two lists, products that are either identical or similar. And it’ll help provide an interface where clinicians can select the medications that they want to preserve moving forward, and cross out the medications that they don’t want to preserve.

And this application is fueled by high quality, structured, coded medication data. And so it knows, for example, all about RxNorm, and which ingredients different products have in common. And it can use those kinds of data to make assessments about whether products are similar, or identical, or whether they share a treatment intent. And it can help bring to the surface, where a clinician can easily see it, places where there may be overlaps or gaps in the difference between two medication lists. So this app isn’t making decisions but it’s helping a clinician make decisions better, using structured data.
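As a sketch of why the coding matters here: once each entry carries an RxNorm code (an RxCUI), and you can resolve a code to its set of ingredients, noticing identical or similar products reduces to simple set operations. The `ingredients_for` lookup below is hypothetical; it could be backed by a local RxNorm load or by NLM’s RxNav service.

```python
def compare_med_lists(list_a, list_b, ingredients_for):
    """Pair up entries from two med lists as identical or ingredient-related."""
    matches = []
    for a in list_a:
        for b in list_b:
            if a.rxcui == b.rxcui:
                matches.append((a, b, "identical product"))
            elif ingredients_for(a.rxcui) & ingredients_for(b.rxcui):
                matches.append((a, b, "shared ingredient"))
    return matches
```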

And the last example I’ll show you of an app actually is going to be for an older patient, not for a child. It’s a cardiac risk app. And this is an app which– we didn’t design it. It was designed by Dave McCandless and Stefanie Posavec for Information is Beautiful. And they entered it into a challenge in Wired magazine. And they won this challenge.

And they made their design available under a Creative Commons license. They published the design as a PDF. And we took that PDF and turned it into a live app that’s fueled by health data.

So this app calculates what’s called a Reynolds Risk Score. It’s a ten year risk of having a heart attack or stroke. And it automatically takes into account a patient’s age, and gender, and blood pressure. And then you can set some flags here, to say whether this is a smoker. And whether there’s a family history of heart disease. So you can say, for example, this 68 year old man has a 40% chance of having a heart attack or stroke before his 78th birthday.

So that’s interesting in its own right. And it looks nice, and it’s easy to discern that information. But it also becomes not just a visualization but a useful counseling tool. Because suddenly you can say, what if you get this patient to quit smoking? And get their blood pressure sort of modestly under control? Maybe down to 125? You can cut your risk in half. You can take yourself down to a 20%, or an 18% chance of having a heart attack or a stroke in the next 10 years.

So these kinds of apps can be useful not just for clinicians, but for patients as well. And even to help clinicians share data with patients in a way that makes sense, and that’s friendly for patients to understand.

And the last app that I’ve put in the list here isn’t really an app at all. It’s a placeholder. It says, your app here. And this is actually a neat development trick for the software developers in the audience. This is an app which is hosted at localhost port 8000. So whatever your local machine is, if you want to try running an app, you can try running an app right in the context of this web hosted resource.

And if you give it permission to access a patient record, it’ll look for a local web server running. I don’t have one right now. So it just said, I couldn’t find a local web server. But if you do spin up a web server and host an app there, you can actually try out your app on your local machine without having to install any of the Consolidated CDA Receiver machinery. You can just run it in a lightweight fashion right over the web. So we think that’s pretty exciting, too.

The last thing I’ll show you with the Consolidated CDA Receiver is the really cool part. You don’t have to use only the apps, or only the patient records that we provided for you. Remember, this demo started with a text box where you could paste in your own data and see how it works. And we think this is really exciting.

This suddenly becomes a tool for evaluating the documents that you are generating. It’s complementary to the Scorecard. It’s complementary to the official validation tools. And I want to show you how it works.

I’m going to switch over to get a new Consolidated CDA on my clipboard here. And we’re very grateful to the folks at Greenway, who contributed not one but two Consolidated CDAs to our repository here. And one of those Consolidated CDAs actually has some vital signs built into it– historical heights and weights for a patient– that we can use to paste the data in and start to see what the data look like in one of the apps.

So I’ve pasted in a Consolidated CDA. In this case it happens to be from our sample repository, but this could be one that you’re generating, too. And I just say, Run Apps. And once I’ve chosen to run apps– let’s see which one I want. I’m going to try the Growth Chart app, because this patient has some heights and weights in it. I’m going to give the app permission to access this record, which I’ve just created from a Consolidated CDA.

And I can see if I zoom in here, here’s a patient who has a 177 centimeter height, and an 88 kilogram weight. So there’s not a lot of rich data. We don’t actually see a whole curve because there’s just one or two points in the record. But I can start to see, for example, this is an obese patient. It looks like this is actually not a pediatric patient at all. He was born in 1962. Well, that’s interesting.

What if I want to play with the data? What if I want to switch things up? Well, I can switch right back to the document, back into the paste box here. And say, let me update the patient’s birth date. What if the patient wasn’t born in ’62? What if they were born in 2002? What if I run apps on a patient like that? And, boom. I can run the Growth Chart app again, now on this new version of the patient record that we saw here.

Excuse me, the perils of live demos. Let me try running the Growth Chart app again on this new version of the patient record, and authorize it to access the record. So now, when I see the patient here, I’m going to see– this is the patient now, for example, viewed in the parent view. Now I see this as a patient who was born in 2002. And their heights and BMI percentiles are adjusted accordingly. They show up under the patient’s current age of 10 years instead of 50 years.

So that’s an example of where you can modify the data that you paste into this Consolidated CDA Receiver and see how those modifications come through in apps. And we’re really excited about this tool. Because it’s lightweight– you can try it out on the web. You can install it locally if you want.

So there’s a resource that’s available for the community to try out their data in apps. And to make sure that their data are correctly structured. And that the right codes are in place, the units are in place. And when things are correctly exported, you can begin to do some really cool stuff with them.

So that’s the Consolidated CDA Receiver. Again, it’s an open source tool, a lightweight way to integrate web apps. It’s available in a public demo. And the code is available again on GitHub. We encourage people to check it out. Try out some of these apps on our sample patients. Try it out on your own data. I want to emphasize the point that we’re taking concrete steps, and this is real.

So here’s an example of integration we did a year ago at Boston Children’s Hospital. This wasn’t using Consolidated CDAs, but it was using another version of the same SMART app technology, the SMART API. And what we did was we took the Cerner system there. And without any help from Cerner the organization, just working with Boston Children’s Hospital, we added a new tab into Cerner using some of the built in extensibility mechanisms that the Cerner PowerChart environment provides. We added a tab here called BP centiles.

And so a user– when a clinician is looking at a patient record, they can just click on the BP centiles tab here. And what happens is a new window pops up with the Blood Pressure Centiles app already loaded with the patient they’ve been looking at. And they can see immediately, at a glance, the same short term view that I showed you in the previous demo.

So that was really cool. Because now the clinicians at Boston Children’s have kind of an integrated way, within the Cerner environment, to launch one of these web apps. And we think in general, there’s a lot of different ways these web apps can be integrated, either tightly or loosely. But when you’re generating high quality data in a structured way, there’s an exciting opportunity to hook those data up into apps.

So I want to sum up by saying– I’ve shown you a bunch of resources today. Kind of a fire hose of information and demos. And the real point that we want to convey is, we’ve been building stuff. And it’s out there. It’s for you. It’s open source. You can use it. We really value your feedback on it. We’d like to engage and make these tools better. And start to push the idea of high quality Consolidated CDA imports and exports, and think about running apps on these data.

So here are three key resources that I showed you. The repository of sample C-CDAs, the Scorecard, and the Receiver. I’ll just emphasize that there’s some action items you can take for each of these. So the sample repository, check it out. Learn from the examples we have. Use the examples to test the robustness of your import pipeline.

And please, by all means, contribute your examples. They don’t have to be perfect. They don’t even have to be done. You can contribute early work and update it often, so the community can follow your progress.

The Scorecard– you can take this and learn about best practices in Consolidated CDAs. You can paste your own documents in and get some pointers about areas you might want to focus on for improvement. And again you can contribute to the scorecard by suggesting new rubrics, and sharing some best practices with the community. So we can build it into the scorecard, and get some automated error reporting, and surface those best practices. So you don’t have to wade through XML to understand whether this is a good document or a bad one.

And then finally the Consolidated CDA Receiver, which is a really exciting opportunity for integrating high quality data with applications, which can be written as lightweight web apps. You can just check that out on the web. You can paste your own data into it. And you can start to build apps that integrate with a very simple, JSON based, resource oriented API that we have documented in the open source repository for this application– the Receiver, as well.
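As a hedged sketch of what talking to that API could look like, pushing a document to the Receiver programmatically is a single HTTP POST. The exact route and payload format are documented in the Receiver’s repository; the path below is a placeholder, not the real endpoint.

```python
import requests

with open("my_export.xml", "rb") as f:
    ccda_xml = f.read()

# Placeholder path: consult the Receiver's repository docs for the real route.
resp = requests.post(
    "https://ccdareceiver.smartplatforms.org/...",
    data=ccda_xml,
    headers={"Content-Type": "application/xml"},
)
print(resp.status_code)
```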

So that’s a lot of resources. We’ll send out all these links so that you can keep track. And the last thing I want to do is leave you with an action item for today, which is to fill out a post-webinar survey. We really want your feedback on this stuff. We’re excited to share it with the community. And we want to build things that people are going to use.

So we’ve been heads down, building a bunch of this stuff to be able to tell a story. But now we’re heads up, looking around, trying to figure out who’s interested, how people can use it, and how we can make it better suited to folks’ needs.

So we have a survey. The link is on the screen right now, and we’ll keep it on the screen for the rest of the presentation. We’ll also send out this link to you via email, so you can click on it that way. Please fill out the survey, it is short. There are some really concrete questions. There are a few open ended questions, and we’re looking to get your feedback. So thank you so much for your attention, and we’re left with at least 10 minutes for some open Q and A time.

I’m going to turn it over to Ryan to help me with moderating the questions, because I haven’t been able to read them as they slide across my screen.

RYAN PANCHADSARAM: Thank you so much, Josh. This was fantastic. OK, so first of all, the number one question we got was, will this meeting be recorded and shared afterwards, the slides and presentation? And I believe the answer is yes, of course. And how will they get it, Josh? Will we just send it out again?

JOSH MANDEL: We will announce it on our SMART platforms blog, and we will email the folks who attended with a link to that recording, as well.

RYAN PANCHADSARAM: Perfect. So a lot of questions happening. I’m going to start with some of the higher level ones, then go into some of the deeper ones. The first one that came out was, what is Blue Button Plus, and its relation to V/D/T?

JOSH MANDEL: So that’s a great question. I’m happy to answer it. But I probably don’t know anybody who is more qualified than you, Ryan, to answer that question. So let me know.

RYAN PANCHADSARAM: Of course. The simple way to look at it is, Blue Button and Blue Button Plus. Blue Button has been the symbol for patient access. Blue Button Plus has been a set of standards that describe how a record should be structured, and how it should be sent. And you really will see Blue Button Plus and V/D/T next to each other a lot of times. And the reason why is, Blue Button Plus is a way to implement and to meet the View, Download, Transmit requirement in Meaningful Use 2.

When the S&I workgroup came together around automating Blue Button, we looked to V/D/T as the guide. And we looked at it as a way– kind of just as the framework for it. So Blue Button Plus and V/D/T, one and the same. Blue Button Plus just provides a few extra things to make it a bit more consumer friendly.

Josh, the next question is, how can we get sample Consolidated CDAs to you if we are an EHR vendor?

JOSH MANDEL: What a wonderful question. And forgive me for not saying that more explicitly, more times. Actually I’ll show you two ways that you can do it. Any time you go to the Scorecard, you’ll see a link at the bottom of the screen, a link about how you can improve the Scorecard. And there’s some really concrete suggestions here.

So one of them is, share your sample C-CDAs. And this will take you to our GitHub repository. So you can, from this GitHub repository, in addition to viewing the sample documents– you can click right here on a link to contribute a C-CDA. And this will just take you to a web form that’ll simply email us. So that’s the simplest way, is you can literally copy and paste a sample C-CDA into this form and submit it that way.

Or you can do it sort of the GitHub way. This is an open GitHub repository. And we encourage you to click the fork button here, which allows you to have your own personal copy of this repository, hosted in GitHub. And you can add documents to that repository. And send what’s called a pull request. That says, hey, we’ve got a new document. Do you want to please incorporate it? And we’ll just go ahead and do that. So that’s two easy ways to add new documents to the repository.

RYAN PANCHADSARAM: Awesome. And one of the follow up questions from a viewer was, can we use the Consolidated CDA repo to add both real and synthetic data?

JOSH MANDEL: So what I would say is, the explicit goal of this repository is to capture the export pipelines of real vendor products. In other words, to give you a set of Consolidated CDAs that represents what you would find in the real world. So when we talk about real versus synthetic, sometimes that distinction has to do with whether those data actually corresponded to a particular patient and their particular problem list.

So certainly, in that sense, we expect the data here are not going to contain any protected health information. But we do want these documents to be representative. For example, if there’s a folder here called Cerner, we want that document to represent what a Cerner system would export.

So if you’re talking about synthetic documents, my real question isn’t are they synthetic? It’s what system, what vendor product, or what piece of software exported them? And we’d like to categorize them that way.

So if you have a software product that exports C-CDAs, we’re happy to make a folder for that product. And show examples of the kind of C-CDAs that you export in that folder.

RYAN PANCHADSARAM: Awesome. One followup question came out, and I think it relates to the question of Blue Button Plus and V/D/T. And so the question is, Blue Button Plus and its relation to the Consolidated CDA? And the reason for that is, Blue Button Plus is following V/D/T incredibly closely. And V/D/T specifies the content standard as the Consolidated CDA. And so that’s why, when you hear Blue Button Plus, you see the Consolidated CDA as part of that as well, too.

OK, Josh. Got some Scorecard questions. Is there a recommended strategy for using the Scorecard and the NIST test tools? How do both fit in, I guess, our development cycle?

JOSH MANDEL: Yeah. So these are both important tools and resources. And they both do different and complementary things. So the NIST tools, at the end of the day, that NIST validator is the one that your document needs to pass for certification. So they’re a sine qua non. But what they check, in general, is the nested XML structure. Those NIST tools make sure that the nodes are where they’re supposed to be. And they check to make sure that things that should be present are present.

And in general, if things are recommendations, rather than the requirements, the validator won’t give you an error. It might just say you violated a recommendation. So it stays very close to the official specification. And that NIST validator doesn’t know anything about the big coding systems, RxNorm, and LOINC, and SNOMED. So it can’t tell if your codes actually exist in those systems, or if maybe they have typos in them.

So there are some things that the official validator doesn't do. And for those, there's the Scorecard. The Scorecard is a bit more opinionated. It actually pays attention to the shoulds, the suggestions, in the Consolidated CDA, in addition to the requirements. And it tries to get out there and say, here are the things you should be doing for a good document.

And I want to emphasize that not all those things are required. Many of those things you could leave out, and still produce a valid document. The Scorecard is trying to highlight best practice, not strict requirements. So it's complementary. The idea is, it's supposed to run quickly and give you some things to look at in your document.

So when you lose points for something in the Scorecard, if it dings you a point for a medication or a problem, it doesn't mean that you're wrong. It just means you should be paying attention to that point and know why it was taken off. And sometimes it's perfectly fine that the Scorecard dings you a point. It's really designed to help developers focus on a couple of key issues.

RYAN PANCHADSARAM: Excellent. So Julie had a follow up question on the Scorecard. And it was, so the terminology for quality checks and such, does someone review and validate the rubrics for the Scorecard? Or how is that process being done?

JOSH MANDEL: So we would love help reviewing and validating the rubrics for the Scorecard. Right now, they've basically been written by me. And the way that I've written them is, I read through every C-CDA document that I've been able to get my hands on, and looked for things that either seemed wrong or didn't make sense, and then shared those things with the Consolidated CDA community through HL7's mailing list. So I would encourage folks who are really interested in the nitty-gritty to subscribe to the Structured Documents mailing list at HL7, which is where a lot of these discussions have been happening.

And so when there have been these kinds of discussions, or best practices that have emerged, I've simply incorporated them into rubrics and released them as part of the Scorecard. So we would love to have more eyes on this stuff. We would love to have validation. I'd love to have people take a look at the particular value sets that we've chosen and the particular rubrics that we've implemented.
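To make the shape of a rubric concrete, here is an illustrative sketch of one in the spirit of the Scorecard's checks: it flags medications that are not coded in RxNorm. This is not the Scorecard's actual code, just a feel for what a "should"-level check looks like:

```python
# An illustrative rubric in the spirit of the Scorecard: it checks a
# "should", not a "shall". A document whose medications lack RxNorm codes
# can still be valid; the rubric just flags a best practice to review.
from lxml import etree

RXNORM_OID = "2.16.840.1.113883.6.88"

def rubric_medications_use_rxnorm(doc):
    """Count medication codings that do / don't use RxNorm."""
    passed = failed = 0
    # in CDA, a medication's coding lives on manufacturedMaterial/code
    for material in doc.iter("{urn:hl7-org:v3}manufacturedMaterial"):
        code = material.find("{urn:hl7-org:v3}code")
        if code is not None and code.get("codeSystem") == RXNORM_OID:
            passed += 1
        else:
            failed += 1
    return passed, failed

doc = etree.parse("sample_ccda.xml")
passed, failed = rubric_medications_use_rxnorm(doc)
print(f"RxNorm-coded medications: {passed} passed, {failed} flagged")
```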

RYAN PANCHADSARAM: And Josh, you made one similar ask to the Blue Button community. Which is, on this call, there are a lot of folks who are already seeing a lot of Consolidated CDA files in their systems, whether they're coming from the outside, or they're generating them. If you are noticing certain patterns that people are doing wrong, share those with Josh.

For example, a lot of the rubrics that are in the Scorecard came about from Josh seeing a pattern in a number of files. And realizing, well, that's actually not the right way to do it. And so he was able to add a rubric or a check for it.

OK, so we have a few more questions here. For the SMART apps example, are they working directly on the Consolidated CDA source files? Or did you develop a C-CDA loader to put the data into Indivo X first?

JOSH MANDEL: So that’s a great question. The answer is, neither of those things. There is a component called the Consolidated CDA Receiver, which you can send documents to, either by pasting them in or through a RESTful interface. And once the documents land in the Receiver, the Receiver parses out a bunch of individual resources from those documents. It parses out medications, and problems, and vital signs, and labs, and patient demographics.

And it makes those resources available through a RESTful JSON API, which we have documented alongside the Consolidated CDA Receiver in the wiki of the GitHub repository. So the answer is, it stands up a standalone RESTful JSON API that it makes available to apps for those data.
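As a rough sketch of that flow, assuming a hosted Receiver at a placeholder URL (the actual endpoint paths are documented in the Receiver's GitHub wiki, so treat these routes as illustrative):

```python
# A sketch of pushing a document to a hosted C-CDA Receiver and reading
# back the parsed resources as JSON. The base URL and endpoint paths are
# illustrative assumptions; see the Receiver's GitHub wiki for the
# actual routes.
import requests

BASE = "https://ccda-receiver.example.org"  # placeholder for a hosted Receiver

# POST the raw C-CDA XML through the RESTful interface.
with open("sample_ccda.xml", "rb") as f:
    record = requests.post(
        f"{BASE}/records",
        data=f.read(),
        headers={"Content-Type": "application/xml"},
    ).json()

# Then fetch the individual resources the Receiver parsed out.
meds = requests.get(f"{BASE}/records/{record['id']}/medications").json()
for med in meds:
    print(med)
```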

RYAN PANCHADSARAM: OK. And we'll try to get two more questions in before the end of the hour. Are you working with EHR certification bodies to collect C-CDA samples from all certified vendors? Having a robust library is important for the data community.

JOSH MANDEL: We would love to have that robust library. We've been attempting to work directly with ONC around making available those specific example C-CDAs that are submitted for certification. We would love to collect those and make them available. It's an idea that we've floated with ONC. There's been some support for it, but we don't have a project under way to actually get those documents.

And if folks have suggestions about how we could obtain them, we’ve thought about simply a Freedom of Information Act request, if nothing else. But we would love to have suggestions about who we could work with to make that happen.

RYAN PANCHADSARAM: Excellent. Who certified the examples that are being distributed for people? I think that might be the C-CDA samples that you’re collecting from Cerner and such?

JOSH MANDEL: The question is who certified them. So let me think about that. The repository that we have is simply a repository where Cerner can contribute documents. And any vendor can contribute documents, and we'll just host them. So in that sense, those documents don't need to be certified. It just has to be a vendor who says, this is what I produce. And let me share it with the world.

And the idea is that by sharing it, other folks who need to interpret it can see it, and build their pipelines so that they're robust to these documents. So just because a document is in the repository doesn't mean it's correct or certified. It just means this is what this particular vendor wanted to share with the world. And we strongly encourage folks to do it.

Now, two of the vendors, Greenway and NextGen, which have documents in our repository today– they had a really easy time finding a document to share with us. Because they’d already gone through Meaningful Use Stage 2 certification. And they just gave us a copy of the document that they used in the certification process itself. That ties back into the last question. Anything that a vendor wants to contribute, we’re happy to include in that repository.

RYAN PANCHADSARAM: Excellent. Well, Josh, thank you. We're almost at the end of the hour. So I'm going to hand it back to you. But before I do that, I know there are a few other questions that have popped up in the chat. Please– we're going to leave the WebEx open for a little bit longer after 1 o'clock. And so if you have more questions, put them in the chat or feel free to send them directly to us. And what we'll do is, in the blog announcement with the recording and slides, we'll see if we can address some of those there. So keep typing questions in the chat, and Josh, back to you to close things off.

JOSH MANDEL: That's great. Thank you so much, Ryan. First off, thanks everyone for attending. I'm really excited that we had such a good turnout. And what I'd say, just in closing, is that for immediate followup we're going to share this survey link with you via email. And again, we'll also share with you a link to the recording when it's available online.

I’m available right now for some questions if folks want to stay on the line, and ask questions. Still probably through chat is the easiest thing, for the next few minutes. But questions that we don’t have time to get to today, we’ll turn into blog posts on SMART platforms. So any questions that we can’t address I’ll just take out of the chat log here and make or answers available through the SMART platforms website.

So with that, thanks to everyone. Anyone who has to drop off now, thank you for attending. And folks who want to stick around, at least for a few minutes, I’m happy to stay here and answer your questions and chat.

RYAN PANCHADSARAM: Excellent. OK, we'll give a moment for folks who need to drop off. OK, Josh. There are a few questions. Here we go.

Someone, I think, was digging into the Consolidated CDA Receiver, and it gave them the choose-a-patient screen. But it doesn't show the list in Firefox or Chrome. Do they need a plug-in or anything? Or–

JOSH MANDEL: No. There may be a bug. So the supported browsers include current versions of Firefox, and Chrome, and Safari; unfortunately, Internet Explorer 10 is the only IE that works with the Receiver right now. We should be able to fix that soon. So there appears to be a bug, and I'm going to treat that as a bug report. Right now I would say just try hitting reload, going back to the playground, and seeing if that helps. But no, you don't need an extension.

RYAN PANCHADSARAM: Excellent. Question from AJ. Is the C-CDA becoming the choice standard over others like CCD, CCR, and C32?

JOSH MANDEL: Yeah. So that's a great question. And there's a trick to that question, which is understanding how all these things are interrelated. So, C-CDA is a set of templates that describe different kinds of documents, including clinical summaries, clinical notes, and different kinds of reports. One of the kinds of Consolidated CDA documents is called a CCD, version 1.1.

So in Meaningful Use Stage 1, the primary standard was a flavor of CCD version 1, which was called the C32. Now in Meaningful Use Stage 2, we have this updated flavor of the CCD, called CCD 1.1. And that is a kind of Consolidated CDA. So that's a pretty confusing taxonomy. The simple answer is there's a particular flavor of Consolidated CDA called a CCD 1.1, and that is the focus for clinical summaries in Meaningful Use Stage 2.

RYAN PANCHADSARAM: Got it. It would be nice to see style sheet samples. Any plans for this?

JOSH MANDEL: We’d love to collect them. We have one style sheet online right now, which is HL7’s example style sheet. But if other folks are generating style sheets for rendering Consolidated CDA documents? We would be very excited to collect and display them alongside of the documents themselves.

RYAN PANCHADSARAM: We’ve got two questions regarding the apps. So the first one was, “maybe a bit technical,” quotes– but why was the Scorecard app not created as part of the C-CDA Receiver app store and instead made completely separately?

JOSH MANDEL: Sorry. Will you repeat the question?

RYAN PANCHADSARAM: I think the question is, the Scorecard app– why was it made separately?

JOSH MANDEL: Why was the Scorecard written separately?

RYAN PANCHADSARAM: Yeah.

JOSH MANDEL: The idea of the Scorecard was that it could be a totally independent utility, that would be able to run in a hosted web fashion, without you having to buy into the idea of running apps on top of your Consolidated CDA data. So it was built as a standalone app in order to limit dependencies.

RYAN PANCHADSARAM: Excellent. Josh, in the chat, do you think it's possible for you to type in the link to the SMART blog? Or actually, I can perhaps go do that.

JOSH MANDEL: It’s smartplatforms.org.

RYAN PANCHADSARAM: OK. So let me, doo do doo doo— smartplatforms.org. Let’s see. The other app-oriented question was, are there plans to include apps for terminology translation? Right, like converting native data to RxNorm, LOINC, et cetera.

JOSH MANDEL: So that hasn’t been a focus of ours. Simply because when it comes to translations, and building high quality translations, it’s all about inputs and outputs. So if I want to translate a code from a local terminology into LOINC, for example, I need to be an expert in my local terminology to do that appropriately.

Now there are a number of tools, including some open source ones, to help with that kind of terminology mapping. But ultimately, the way you build those mappings is by taking people who are experts in your local codes and having them use those tools. So we haven't focused on that except to say, these are activities that will need to happen in order to meet Meaningful Use Stage 2.

So there will have to be experts in every local coding system who are devoting some attention to generating those mappings. But that's necessarily a distributed thing that's going to have to happen with local knowledge. It's pretty difficult to build tools that do that in a broadly applicable way. The best you can do is sort of support local analysts.
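As an illustration of what the output of that local, expert-driven work looks like, here is a tiny sketch of a curated local-to-LOINC table applied at export time. The local codes are invented; the LOINC targets are real:

```python
# A sketch of applying a locally curated terminology map at export time.
# The local codes are invented; the LOINC targets are real. Building the
# table itself is the expert, local-knowledge step that tools can't automate.
LOCAL_TO_LOINC = {
    "LAB-GLU": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    "LAB-K":   ("2823-3", "Potassium [Moles/volume] in Serum or Plasma"),
}

def to_loinc(local_code):
    """Translate a local lab code, or flag it for a local analyst."""
    try:
        return LOCAL_TO_LOINC[local_code]
    except KeyError:
        raise KeyError(f"{local_code} is unmapped; route it to a local analyst")

print(to_loinc("LAB-GLU"))  # ('2345-7', 'Glucose [Mass/volume] in Serum or Plasma')
```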

RYAN PANCHADSARAM: OK. Have state HIEs and Beacon Communities in ONC been engaged to consider how this might be integrated into current industry-wide efforts around interoperability? More specifically, as something that systems consuming these C-CDA documents can use to verify the quality of C-CDAs they are about to try to import?

JOSH MANDEL: What a great question. And that would be a very exciting direction. The answer, quite simply, is no. We’re just starting to show these tools off. And we’d love to have folks from the state HIE and Beacon Communities thinking about them. And thinking about how they can incorporate these tools into their processes.

RYAN PANCHADSARAM: Exactly. Just to build on top of that answer, I think that's what this webinar is meant to be. To showcase all the different tools that people have access to and to show that they're there. And that they're complements to the official tools and that both are equally valuable.

Let’s see, I think we’ve– OK, actually there is one more question. How do you see FHIR playing in the SMART framework? I think that’s perhaps the last question that we have.

JOSH MANDEL: So, FHIR is a really exciting emerging standard in HL7. Which is a way to represent health data in a resource oriented fashion, with sort of a linked data flavor to it. Where individual resources are defined in terms of readable XML, or readable JSON, for example. Simple resources, with well defined properties, that link to each other.

So it’s a really exciting direction for HL7 to be taking. And ultimately, I think when FHIR picks up and has defined resources for all the clinical summary data, which they’re well on track to do, and moving towards balloting later this year, in 2013. When FHIR has defined all those things, that’ll be a really nice target for building tools that import Consolidated CDAs and other kinds of health data that come over the wire for Meaningful Use Stage 2, and 3, and beyond. It’ll be great to have tools that import data from all those sources and map them to FHIR and are able to expose a FHIR API.

And when we get to that point, really that's going to give us the opportunity to integrate these web apps in a standards-based way using FHIR, which will be a standard by then from HL7. So I think that's an exciting future opportunity. The reason, today, why we're focused on this standalone component is quite simply that we've been trying to do something practical that works right now, in Meaningful Use Stage 2, with all the standards that are sort of baked into those regulations.

So that’s been our focus to date. But we’re really excited about a future with FHIR in it.

RYAN PANCHADSARAM: Awesome. To close off, actually, there’s a really good comment that Rachel just pasted to me from someone. And it’s the idea that one can motivate vendor submissions by providing– sorry, the early versions from vendors could– let me just read it word for word, actually. One could motivate vendor submissions if they could provide early versions, and the community could help them with forks to the document. For example, improving sections, code changes, et cetera.

And I think that is, actually, the call to action here. If anyone on the call is from an EHR vendor or is producing Consolidated CDAs, toss them into the pool early. Try the Scorecard early, as well. I think it's going to be an easy way to get feedback on the C-CDA that you're producing– and in a lot of ways, free feedback. And ultimately that'll lead to a beautiful C-CDA coming out of your system.

Josh, I think that’s all the questions. We were able to get all of them. So any other final words before we end this session?

JOSH MANDEL: Oh, was that question for me? Since I'm the only one who can talk. Thank you. No, I'm really excited about people's interest, and glad everyone could attend. And as promised, we will send out links to the survey and to the recording. Thanks again, everyone.

RYAN PANCHADSARAM: Thank you everyone. Have a really good rest of the day.

JOSH MANDEL: Bye bye.