[This Transcript is Unedited]

Department of Health and Human Services

National Committee on Vital and Health Statistics

Subcommittee on Standards and Security

September 22, 2005

National Center for Health Statistics
Auditorium A & B
3311 Toledo Road
Hyattsville, MD 20782

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway, Suite 180
Fairfax, Virginia 22030
(703) 352-0091

P R O C E E D I N G S [8:45 a.m.]

Agenda Item: Call to Order – Welcome and Introductions – Mr. Blair and Mr. Reynolds

MR. REYNOLDS: Good morning, my name is Harry Reynolds and I am with Blue
Cross and Blue Shield of North Carolina and co-chair along with Jeff Blair of
the Subcommittee on Standards and Security of the National Committee on Vital
and Health Statistics. The NCVHS is a federal advisory committee consisting of
private citizens that makes recommendations to the Secretary of HHS on health
information policy. On behalf of the subcommittee and staff I want to welcome
you to today’s hearing on secondary uses of clinical data. We are being
broadcast live over the internet and I want to welcome our internet listeners
as well.

As is our custom we will begin with introductions of the members of the
subcommittee, staff, witnesses, and guests. I would invite subcommittee members
to disclose any conflicts of interest. Staff, witnesses, and guests need not
disclose conflicts. I will begin by noting that I have no conflicts of
interest. Jeff?

MR. BLAIR: Jeff Blair, Medical Records Institute, co-chair of the subcommittee, and there are no conflicts of interest that I’m aware of.

DR. HUFF: Stan Huff with the University of Utah and Intermountain Health
Care in Salt Lake City. I don’t think I have any conflicts but just to disclose
that I am a vocabulary co-chair of HL7 and also a co-chair of the LOINC
committee, but I don’t think we’re discussing anything where those would be a
conflict today.

MS. GOVAN-JENKINS: Wanda Govan-Jenkins, NCHS, CDC, and staff to the
subcommittee, and I don’t have any conflicts of interest.

DR. WARREN: Judy Warren, University of Kansas School of Nursing, member of
the subcommittee, and I have a potential conflict because I am on the editorial
board of SNOMED.

MS. FRIEDMAN: Maria Friedman, Centers for Medicare and Medicaid Services,
and lead staff to the subcommittee.

MS. ZIGMAN-LUKE: Marilyn Zigman-Luke, America’s Health Insurance Plans.

DR. CIMINO: Jim Cimino, Columbia University.

MS. VERTANE(?): Laura Vertane with Wexler and Walker for IMS Health.

MS. BOYD: Lynn Boyd, College of American Pathologists.

MS. JACKSON: Debbie Jackson, National Center for Health Statistics, CDC,
staff.

MR. REYNOLDS: Justine, are you on the phone yet? Is anyone on the phone?
Okay, Justine Carr was going to call in, that’s why I was checking. Okay, Stan,
we appreciate you handling this portion of the program so if you want to
introduce you can.

DR. HUFF: Well, the whole topic basically relates to a vision that has been a part of medical informatics for many years and is also noted in earlier reports from the NCVHS committee: a lot of what we would like to accomplish depends upon capturing detailed clinical data at the bedside and then being able to use that information for decision support, quality assurance, automated billing, all of those kinds of things. And while we’ve been making steps toward that, we certainly haven’t realized that vision yet in the practical everyday part of what’s happening in U.S. medicine.

And so we’ve convened these hearings to discuss the issues of how that is coming, what people are doing, what we need to do to provide incentives or encourage it, what the potential gain is for the country if we actually realize that vision, that sort of thing. And I know that SNOMED is one of the terminologies, among other work that you may be doing, that would be the basis in fact for capturing that detailed clinical data, and so we wanted to get your perspective and ideas and have some open discussion also; we hope you can stay so we can discuss after Jim has presented as well.

Kent is a professor, and he’s a pathologist by training so he’s got to be a good guy, and he is also the head of the SNOMED editorial board. And so without further introduction —

MR. BLAIR: Stan, I just have one question about the introduction that you were making. You mentioned capturing information at the bedside; was it your intention to limit the scope to capturing information only within acute care institutions?

DR. HUFF: No, that was meant to be basically point of care, so whatever environment that’s happening in. Thanks for that clarification.

Agenda Item: Secondary Uses of Clinical Data – Dr.
Spackman

DR. SPACKMAN: Thank you, Stan, it’s a pleasure for me to be here and to give you some of my ideas. I could probably go on all day on this, but I’ve picked about 45 minutes’ worth of information, basically to give you what I think are some of the high points and to describe what I think needs to happen next, so here’s what I’m going to talk about over the next few minutes. A little bit about definitions of what we mean by secondary uses of clinical data; some examples, in particular the Clinical Outcomes Research Initiative’s National Endoscopic Database, as an example of something that I’m familiar with that some of you may not have been aware of, that I’ve been involved in through my other activities at OHSU. And then something about the standards required, reference terminologies like SNOMED CT; and then data collection, data collection protocols and checklists; and then some of the dilemmas and challenges that we face, because we’ve made a lot of progress but, as Stan said, we’re still not quite where we want to be.

Definitions, usually when we talk about secondary clinical data we’re
talking about data that’s used not for primary direct patient care so secondary
use is anything beyond primary. But I would like to take a slightly different
perspective on what we might want to mean by secondary uses. And first just
think about why clinicians record data, if you think about a family
practitioner and their record often what they’ll do is jot down a few notes to
aid their memory, so it’s really a way of keeping track in their own minds of
what they’ve been doing. And then legal documentation is another purpose and
probably can be regarded as a primary purpose for many clinical documentation
artifacts.

And then we want to communicate to other members of a health care team so
very often progress notes are written so that you can communicate to the next
group. And to support and justify reimbursement is another purpose and so you
might say well which of these purposes is primary, and I think it depends on
the individual instance of when people are recording, what their primary
purpose is.

And then there are, increasingly, clinical documentation activities, and I think nursing in particular is burdened with these, I look at Judy, she nods, for research protocols, minimum datasets, professional guidelines; and then increasingly with the use of software there are some incidental constraints imposed by software that cause people to record data where the primary purpose is either to work around or to work with the software.

So if we think about all of those as purposes for which people record
clinical data then secondary uses would be uses of the data that maybe don’t
take account of those primary purposes. So maybe instead of focusing on direct
patient care we might say that secondary uses of clinical data are any uses
other than the primary purposes for which the data is recorded. So ICD-9-CM
reimbursement coding can be derived from the dictated discharge summary, where
the primary purpose there might have been documentation plus or minus
communication but not necessarily reimbursement, so there I would say if the
hospital department that’s responsible for billing looks at the discharge
summary and derives a code for billing that’s a secondary use.

And likewise communicable disease reports to the health department can be
derived from routine lab cultures and there the primary purpose is
communicating from the lab to the ordering physician and so secondary purposes
would be public health purposes.

And that’s almost the same distinction but I think it’s important to focus
on why people record data because that gets to the root of what kind of data
we’re going to get. And of course the ideal is record data once with fidelity
to the clinical situation and then allow systems to derive needed data from
that single instance of recording. And I think that ideal tells us there’s a
way to get the data that we need without adding a lot of extra burden to the
clinician.

In real practice though if you look back at that list of reasons why people
record things it’s not entirely clear that you will always get the data that
you want from a particular configuration of purposes for recording data and so
I think there has to be attention paid to the purpose for which data is
recorded and hopefully we can move toward this ideal. But the reality is
clinicians often find themselves entering data for multiple different reasons
and sometimes the same data from two different perspectives requires two
different approaches to recording it.

And the other thing that’s happening that I’ve anecdotally heard from people
and I’ve experienced myself is that reimbursement coding skews the data
somewhat, in other words the level of detail is tuned to optimize the purpose
for which it’s being recorded, so I might want to say that the clinical
situation is a massive transfusion in the case of a motor vehicle accident but
the reimbursement coding gives me options like DIC and trauma not otherwise
specified or something like that. And so the level of detail and the type of
information that I’m recording is going to be altered by the constraints or the
possibilities that I’m given to record it. And so sometimes clinical reality is
obscured by the lack of fidelity in the coding options that are available.

There are some examples where this secondary use is actually beginning to
work. I remember attending a conference at the CDC in Atlanta where we heard reports, I think two or three years ago, of Hawaii beginning to have automated reporting of its microbiology results, from the two or three laboratories that did infectious disease testing, to the state department of health. And they found a tremendous decrease in lag time between the
detection of a case and reporting and also an increase in coverage, they got a
lot more cases of the diseases that they were interested in. So that type of
reporting that’s occurring on an automated basis from laboratories is beginning
to work.

There’s another example of where secondary use of data in this instance for
clinical research is successfully happening. There is an initiative that was
started by a researcher who’s at OHSU but under the auspices of the American
Society for Gastrointestinal Endoscopy, and it was started in 1995, they called
it the Clinical Outcomes Research Initiative but it was focused on endoscopy,
and eventually they called their shared data repository the National Endoscopic
Database; this was supported by grants from the NIH. It’s receiving more than 20,000 clinical procedure reports from more than 750 endoscopists each month at the current time, so it’s a very large scale from the perspective of endoscopy. And CORI research data have been used to support more than 50 major
research initiatives, it operates as a not-for-profit organization under the
auspices of ASGE and they have a website if any of you want to look it up, it’s
www.cori.org.

The way it works is they provide free software to endoscopy practices and
that allows them to record and print out a report of the endoscopy. And the
data is saved to a local database so they can look at their own cases and they
can also then get software support free from the consortium. And what they do
then is they agree to send the procedure data to the NED; it’s de-identified, both the individuals and the practitioners, so there are no security, confidentiality, or privacy problems there; and then the data are tested for
their quality for adhering to particular standards for data, and then the
aggregate data are used for a variety of research purposes.

So over 750 endoscopists at 114 sites, and they’ve now got an aggregate of
over a million endoscopy procedures, so there’s a lot of really valuable data
there. The most important procedures are colonoscopies, EGDs, and flexible
sigmoidoscopies, that’s about 95 percent and there’s some others as well, ERCP,
EUS, motility, bronchoscopies.

So there are some lessons that I think the CORI group learned. I was on an
advisory committee for five years for this particular group and so I had some
direct interaction with them on how they went about accomplishing this task.
And one of the lessons is that clinicians will accept certain constraints on
their data recording if they get something in return, and so in this case they
get automated reports, which can be sent to the referring physician sooner relative to dictation and transcription, and it doesn’t require a duplicate effort for billing, reimbursement, and medico-legal documentation.
The reports can include images and they have a standardized interface that all
these endoscopists use, it’s the same software interface, and because it’s the
practitioners themselves who are designing the software they make it work for
endoscopist reports.

And what they get out of this of course is the free software and also the
sort of altruistic contribution to a specialty focus database that advances
their profession and provides the availability of pooled data for research, so
those who contribute to the data also can access the data for their research
purposes.

Another lesson from this particular effort is that successful research data analysis depends on the quality of data collection. There were some fields that were allowed to be optional, and the experience was that they didn’t get a large enough proportion of those fields filled in to make any kind of generalization. One of the fields was ethnicity, and they used the federal set of ethnicity categories, and it was left blank in such a large proportion of the cases that they really couldn’t make any generalizations about whether ulcers of a particular kind were more common in one ethnic group versus another, those kinds of inferences that they wanted to make. So with the software they could say, okay, you can’t generate your report unless you fill in this field. They made that change, and obviously there is an override but it’s much harder to use, and of course the completeness of the data went up well above 95 percent and then they were able to make generalizations about that data.

So one of the lessons here is that unless you have some constraints and some
way of getting people to answer particular questions you won’t get enough data
to really make any useful generalizations from it.
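
[Editor’s note: a minimal sketch, in Python, of the required-field gate described above; the field names and the override flag are hypothetical illustrations, not the actual CORI software.]

    REQUIRED_FIELDS = ["procedure", "indication", "findings", "ethnicity"]

    def can_generate_report(record, override=False):
        """Block report generation until the required fields are filled in."""
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing and not override:
            print("Cannot generate report; missing:", missing)
            return False
        return True

    record = {"procedure": "colonoscopy", "indication": "screening",
              "findings": "normal examination"}
    print(can_generate_report(record))             # False: ethnicity is blank
    record["ethnicity"] = "not Hispanic or Latino"
    print(can_generate_report(record))             # True: report may be printed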

And the second lesson I would take from this is that the data collection
needs to be crafted and tuned for the individual practice case because
endoscopists designed and refined the software, they made it work for their
particular case. And it’s unlikely that generic software would actually solve
the problem nearly as well.

There’s also a sort of downside lesson and one of the downsides is that the
NED data is really a silo, people have looked at it from the perspective of the
MST, the minimum standard terminology for gastrointestinal endoscopy, and
there’s been some attempt to map the CORI terminology and data to that. And
there’s also been an attempt to link to SNOMED, but until it is actually linked into a terminology that allows you to connect to other people who are collecting this kind of data, that data remains an isolated silo.

So that leads me into talking about terminology standards which obviously
because of my work with SNOMED CT I’m very interested in, and a standard
terminology should enable us to solve these kinds of problems, it should enable
us to interoperate across these different silos of data by providing us with a
mechanism for representing the meaning of clinical situations using codes and
the interrelationships between those codes. It also provides, and I think this is important, I’ll emphasize it with some examples, a mechanism for tracking the history of codes over time, so it sustains the value of previously recorded data.

Standardized reference terminology though doesn’t solve by itself the
problem of data recording, of getting appropriate data collection and data
entry, and that’s where I think a lot of additional work is still needed.

So SNOMED Clinical Terms specifically and directly supports representation and queries that are based on meaning, and computable tracking of historical relationships of retired codes; and it indirectly supports specification of the user interface, definition of minimum datasets, checklists, and data collection standards.

So I’ll give some examples that illustrate what I mean by these points. Independent of the natural language that is used, you can use English or French or Spanish or German, you still have the same code that represents the same meaning. And it’s independent of the particular term that you use; an example I often give is the pyogenic granuloma. Most people look at that term and they say, well, pyogenic means pus-forming, and of course a pyogenic granuloma doesn’t form pus; and granuloma means a certain kind of inflammation, and of course a pyogenic granuloma is not really a granuloma. So a pyogenic granuloma is neither pyogenic nor a granuloma, and yet we use that term.

We also have a common representation across information systems, across
interfaces, across implementation sites, across types of site and types of
user, so that’s the purpose of the reference terminology.

So take an example of a query or a decision support rule, for example: if the patient has had an MI but no congestive heart failure, AV block, asthma, peripheral vascular disease, or type I diabetes, and is not taking a beta blocker, you need to consider adding beta blocker therapy. Well, all of those concepts, like congestive heart failure and AV block and asthma and peripheral vascular disease, those are all represented by individual codes in the SNOMED system, and you can write a rule that would be represented using those codes. And then if those codes are linked to the way that data is recorded in the patient record you can run a rule like this automatically and identify whether the rule should fire or not fire.
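
[Editor’s note: a minimal sketch, in Python, of how such a rule might be written over coded data; the short string identifiers are placeholders for SNOMED CT concept codes, and the drug list is illustrative.]

    # Hypothetical rule: had an MI, none of the excluding conditions,
    # not already on a beta blocker -> consider beta blocker therapy.
    MI = "myocardial infarction"
    EXCLUSIONS = {"congestive heart failure", "AV block", "asthma",
                  "peripheral vascular disease", "type 1 diabetes"}
    BETA_BLOCKERS = {"metoprolol", "atenolol", "propranolol"}

    def suggest_beta_blocker(problems, medications):
        return (MI in problems
                and not (problems & EXCLUSIONS)
                and not (medications & BETA_BLOCKERS))

    print(suggest_beta_blocker({MI}, set()))                # True: rule fires
    print(suggest_beta_blocker({MI, "asthma"}, set()))      # False: exclusion
    print(suggest_beta_blocker({MI}, {"atenolol"}))         # False: already on one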

We sometimes refer to these as the reference properties of the terminology; in other words, we provide formal semantic definitions of the codes, and this gives the relationships between the codes. And those relationships are labeled with labels like is-a, which shows the subtype/supertype hierarchy; or finding site, which indicates the relationship between a finding and its anatomical location; or causative agent, which might indicate the relationship between a disease and the organism that caused it; or associated morphology, which might show the relationship between a particular disorder and the pathologic lesion that is characteristic of it; and so on. So that’s what the formal definitions are all about.

And that would enable us to answer, without knowing in advance that these two things are related, whether tuberculous ascites is a kind of bacterial effusion. Let’s say that you were looking at the effectiveness of antibiotics and you wanted to know whether antibiotics that were used to treat bacterial effusions were as effective as those that were used to treat other kinds of lesions, and so then you have a case where the code that’s reported is tuberculous ascites and you say, well, does that fit my data query or not. And by looking at these relationships, which say that tuberculous ascites would involve the peritoneal cavity, have serous effusion as its morphology, and have a bacterium as the causative agent, you’d be able to say, well, a serous effusion is a kind of effusion and Mycobacterium tuberculosis is a kind of bacterium, so this meets the criteria for a bacterial effusion and it would fit the query.
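
[Editor’s note: a minimal sketch, in Python, of the subsumption test just described; the tiny graph and attribute names mimic the example but are illustrative, not actual SNOMED CT content or identifiers.]

    # Does "tuberculous ascites" meet the criteria for "bacterial effusion"?
    IS_A = {  # subtype -> supertypes
        "serous effusion": ["effusion"],
        "Mycobacterium tuberculosis": ["bacterium"],
    }

    DEFINITIONS = {  # formal (partial) definitions via defining relationships
        "tuberculous ascites": {
            "finding site": "peritoneal cavity",
            "associated morphology": "serous effusion",
            "causative agent": "Mycobacterium tuberculosis",
        },
        "bacterial effusion": {
            "associated morphology": "effusion",
            "causative agent": "bacterium",
        },
    }

    def subsumes(general, specific):
        """True if specific equals general or is a transitive subtype of it."""
        if specific == general:
            return True
        return any(subsumes(general, p) for p in IS_A.get(specific, []))

    def meets_criteria(candidate, query):
        """Every defining attribute of the query must subsume the candidate's."""
        cand = DEFINITIONS[candidate]
        return all(attr in cand and subsumes(value, cand[attr])
                   for attr, value in DEFINITIONS[query].items())

    print(meets_criteria("tuberculous ascites", "bacterial effusion"))  # True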

Now I’m going to shift gears a little bit and talk about an additional
feature of the clinical terminology that we often don’t emphasize but I think
is as important from the perspective of secondary uses and that is historical
tracking of what happens to codes. And so I’d call this the first rule of
coding, if all you’re doing is typing in something for somebody to read then we
don’t have to worry about this because people can still read whatever it is, as
long as it’s legible you can read it. But as soon as you put a code to something
presumably you want to be able to use that code and use that meaning forever
after as long as that data is relevant. And so the first rule of coding is that
yesterday’s data should be usable tomorrow, if we don’t want yesterday’s data
to be usable there’s no point in assigning codes.

So let’s talk about a secondary use, which is cancer registries. Cancer registries collect cancer data, and they code and classify it using the International Classification of Diseases for Oncology (ICD-O). In 2001 there was a version change from version 2 to version 3, and this was of course necessary because there’s been a change in the way people look at different diseases; in particular, the leukemias and lymphomas went through a significant change in that transition.

Now, when I was in training, not that long ago but some time ago, we had the French-American-British classification of leukemias, and for the myelogenous leukemias that has carried forward, but for the lymphatic leukemias it really hasn’t. We had three subtypes, L1, L2, and L3, and there were ICD-O-2 codes for these: for L1 it was 9821/3, for L2 it was 9828/3, and for L3 9826/3. And over time there was a change in the way that people classified acute lymphoblastic leukemia, and so L1 and L2 really aren’t important anymore, but presumably there are a lot of cases out there that have been recorded using these codes.

So what happened was that when you go from ICD-O-2 to ICD-O-3, L1 and L2 got merged into what’s now called precursor B-cell lymphoblastic leukemia, and that now has a morphology code of 9836/3. So in ICD-O-2 those first two, the 9821 and the 9828, are now no longer valid codes for recording and using for secondary data. On the other hand, L3 has carried forward and is now referred to as Burkitt cell leukemia. In the SNOMED history table we have two rows that indicate this change, and there are three elements of these two rows: the code on the left, the relationship type, and the code on the right. Now, if you look in the actual table there will be integers there, but those integers can be found in rows in the concepts table that are accompanied by the code and the name; so 9821/3 was replaced by 9836/3, and those historical relationships are kept in the SNOMED history table, and we think this is a key element of tracking codes across time.
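
[Editor’s note: a minimal sketch, in Python, of resolving retired codes through a history table like the one described; the table below encodes only the ICD-O example from the testimony, and the structure is illustrative rather than the actual SNOMED release format.]

    # "Yesterday's data should be usable tomorrow": follow 'replaced by'
    # links so records coded with retired codes group with current ones.
    HISTORY = {
        "9821/3": ("replaced by", "9836/3"),   # FAB L1 -> precursor B-cell ALL
        "9828/3": ("replaced by", "9836/3"),   # FAB L2 -> precursor B-cell ALL
    }

    def resolve(code):
        seen = set()
        while code in HISTORY and code not in seen:   # guard against cycles
            seen.add(code)
            _, code = HISTORY[code]
        return code

    print(resolve("9821/3"))   # 9836/3
    print(resolve("9826/3"))   # 9826/3 (L3, carried forward as Burkitt cell leukemia)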

Now let me talk about what I think is maybe not the first rule of coding but
the first rule of data quality and that is that, I may be quoting somebody else
here, I haven’t identified where this idea came from so I’ll claim it as mine
until somebody tells me otherwise. But the quality of the data collected is
directly proportional to the care with which options are presented to the user.
And I really think that this is one of the key lessons of the CORI project and
it’s also a very important lesson that we need to learn about secondary uses.
If we’re going to make secondary use of data we need to pay attention to the
options that are being presented to the user.

And there are really two basic parts from my perspective of how you specify
what should be selected. The first one is the required elements, what is it
that you want to see collected, and the second one is how those elements should
be described or coded. As an illustration of this, another thing I’ve been involved with is the CAP cancer protocols: the College of American Pathologists has spent a lot of effort organizing a series of protocols that specify both the required data elements and the way they should be recorded and coded for cancer cases, because that is a key element of pathology practice.

So if we take the non-Hodgkin’s lymphoma CAP cancer protocol, it specifies five key elements that need to be collected in every case; so if you’ve got a case of non-Hodgkin’s lymphoma you should record and transmit information about all five of these elements: the specimen type, the tumor site, the histologic type, the extent of the pathologically examined tumor, and phenotyping. And it’s actually become a standard, so that if you’re going to be accredited by the American College of Surgeons as a cancer center you need to make sure that your cancer reports, your pathology reports, at least report these key data elements.
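
[Editor’s note: a minimal sketch, in Python, of the checklist idea described here; the five required elements come from the testimony, while the report fields and validation logic are illustrative.]

    # Hypothetical validation of a pathology report against the CAP
    # non-Hodgkin's lymphoma protocol's five required elements.
    NHL_REQUIRED = [
        "specimen type",
        "tumor site",
        "histologic type",
        "extent of pathologically examined tumor",
        "phenotyping",
    ]

    def missing_elements(report):
        """Return the required elements absent from the report."""
        return [e for e in NHL_REQUIRED if not report.get(e)]

    report = {"specimen type": "lymph node, excision",
              "tumor site": "cervical lymph node"}
    print(missing_elements(report))
    # ['histologic type', 'extent of pathologically examined tumor', 'phenotyping']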

Now, the second part of the specification, besides the key elements, is how you record those. Of course histologic type is a very important component of a lymphoma report, but the classification of lymphomas has been changing for a long time, maybe because we still haven’t quite got it exactly right. There have been more than 25 different classifications published since 1925, and at least five major classifications in the past 30 years, including the Rappaport, the Working Formulation, the Kiel classification, the Revised European American Lymphoma classification, or the REAL classification, and more recently the WHO classification.

Well here’s a section of the SNOMED CT hierarchy and it may be a little bit
too small to see but just to sort of give you a sense of the amount of detail
that’s in there. The ones that are in bold are in the WHO classification, the
ones that are blue and underlined are in the REAL classification, if it’s bold
and blue and underlined then it’s in both. If it’s not bold and just black then
it’s in neither. So you can see, I’m drawing your attention here to Burkitt
cell leukemia and Burkitt lymphoma, if you’ll recall my prior example about L3
acute lymphoblastic leukemia that’s now Burkitt cell leukemia, and you’ll
notice that Burkitt cell leukemia as such doesn’t exist in either the WHO or
the REAL classification of lymphomas but it’s a subtype of Burkitt
lymphoma/leukemia which is in the WHO classification. But in the REAL you just
have Burkitt lymphoma. That’s just one little detail.

Now if you were to present this entire hierarchy to a clinician and say here
go ahead, use SNOMED to record your lymphoma classes they wouldn’t view it as
very helpful, they want a selected set, they want to say well I’m collecting my
lymphoma cases using the WHO classification so I only want to see that subset.
So if they were collecting using the REAL classification this is the smaller
set that they would see and there you see Burkitt lymphoma. But if you were
collecting using the WHO classification then you’d get Burkitt
lymphoma/leukemia and then four subtypes there which would be the endemic,
sporadic, immunodeficiency associated and atypical. So for a particular data
collection task we need to subset that big collection of concepts even though
in order to do analyses of prior data we need all those codes that were used
for prior data.
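
[Editor’s note: a minimal sketch, in Python, of the context-specific subsetting described here; the membership tags are illustrative stand-ins for what SNOMED CT would handle with reference sets (subsets).]

    # A clinician collecting under the WHO classification sees only the
    # WHO-tagged concepts; analysis of old data still uses the full set.
    CONCEPTS = {
        "Burkitt lymphoma/leukemia":  {"WHO"},
        "endemic Burkitt lymphoma":   {"WHO"},
        "sporadic Burkitt lymphoma":  {"WHO"},
        "Burkitt lymphoma":           {"REAL"},
        "Burkitt cell leukemia":      set(),   # in neither; kept for prior data
    }

    def picklist(context):
        return sorted(c for c, tags in CONCEPTS.items() if context in tags)

    print(picklist("WHO"))    # the WHO data-entry view
    print(picklist("REAL"))   # the smaller REAL view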

So if you look at the full SNOMED hierarchy, as opposed to the ICD-O codes, as opposed to the REAL classification or the WHO classification, you get different perspectives on a large set of codes. Some distinctions are now much less important clinically; for the lymphoma/leukemia distinction it is now pretty much agreed, for many of these cases, that the leukemic manifestation is really just one phase of the same disease, so these are different phases of the same disease, especially for B-cell malignancies. And the distinction between non-Hodgkin’s lymphoma and Hodgkin’s lymphoma is now probably less important, but other distinctions are more important, like the B-cell versus T/natural killer cell distinction; that’s a much more important one from a clinical perspective.

Let me just take this one step further. You might think if we’ve got the WHO classification we’re okay, but now suddenly switch perspectives and say I’m a dermatologist and I’m only concerned with primary cutaneous lymphoid malignancies; well, now there’s yet another classification, only some of which appear in the other classifications. So if you look at the EORTC classification, which is a dermatology-based classification of primary cutaneous lymphomas, you get another set of malignancies, only some of which, as I said, are in the others.

The EORTC ignores the non-cutaneous forms, and it lumps some primary cutaneous forms that are split in the WHO classification; so, for example, there primary cutaneous follicle center cell lymphoma includes the primary cutaneous forms of these other three. So again, the data collection, if you’re a dermatologist, is going to be on a different basis than if you’re a general pathologist looking at other data.

And many of these classifications also talk about only wanting to represent real disease entities. Sometimes I wonder exactly what they mean by that, what’s a real disease entity and what’s not, but what they mean is that they want to reject differences that are based solely on incidental manifestation patterns and the phase of disease progression. But if you look at ICD-O it’s making those distinctions; it’s saying that the leukemic phase of Burkitt’s and the lymphoma, Burkitt’s lymphoma, are represented by two different ICD-O codes. So from a clinical perspective that distinction is now not that important, but from a secondary use data collection perspective the distinction still remains of value.

So here you have Burkitt cell leukemia and Burkitt lymphoma, and of course the recent WHO classification says that Burkitt cell leukemia is just a leukemic phase of the same disease. You might say, well, which disease are we talking about here: is Burkitt lymphoma a subtype of Burkitt lymphoma/leukemia, that’s how we’ve got it organized in SNOMED, and is Burkitt cell leukemia also a subtype of that more general one? And when people collect data are they going to just say Burkitt lymphoma/leukemia and not specify whether it’s the leukemic phase or not, and how will that affect our cancer registry data, is that going to be important for us?

So these are the kinds of questions that arise when we have primary data
being collected by the clinician with their perspective given current knowledge
patterns and secondary purposes that are based on other classification systems.

So you can see here that if you look on the right-hand side, the B-cell neoplasm classification would have this general class of Burkitt lymphoma/leukemia, and the lymphoblastic leukemia classification of before would have mature B-cell ALL linking to Burkitt cell leukemia. This is just to re-emphasize the fact that a well-defined reference terminology will tell you that mature B-cell acute lymphoblastic leukemia is a kind of Burkitt lymphoma/leukemia; that relationship is there, so that if we need it for data analysis down the road we can accomplish that. Going the other direction, if you don’t collect that more detailed data you can’t specialize from lymphoma/leukemia down to the leukemic specificity.

So what does this mean for data collection? It really means that we need to
organize the codes and names that we present to users according to a single
coherent view and this should be current and specific to specialty and context
so presenting the entire SNOMED hierarchy is likely to cause confusion and
that’s not what we intend. And just a reminder, there have been 25 lymphoma
classifications over the past 75 years, stability should not be expected.
Molecular research is revolutionizing our understanding of lymphomas, but
yesterday’s data should be reusable tomorrow as much as possible. So that’s the
task of the reference terminology, one of the main tasks.

So there are some dilemmas that require our attention, the value of
secondary data accrues mainly to parties other than those who collect it, and
we have to find a way of closing that loop with some kind of feedback, that’s
what’s happened with the CORI project. The feedback loop occurs in a very
direct fashion there because when the people who want to collect the data need
a particular data element they have a way of specifying to the people who’ve
agreed to work with them, we want this particular data element filled out all
the time. Now the user community could rebel and say we’re not going to do that
and I think the data collectors would listen, but there is a very tight
feedback loop there, there’s a feedback loop that says here’s the data that we
need and here’s a mechanism for you to collect it. I think that’s a very
positive thing, I’m not sure how to generalize that.

And the second dilemma is that while the value of secondary data depends on
its quality the quality is directly proportional to the care with which it’s
collected, and that gets back to this whole question of what is the purpose for
which people are collecting data and what are the supports and mechanisms that
we provide to them for collecting the data.

And there are some data quality questions that really need attention, who’s
responsible for defining professional standards of data quality? Who decides
that if a patient enters the emergency room with a head injury that you should
always ask whether there was a loss of consciousness? I think there should be
some kind of professional standard surrounding that kind of data collection
requirement, what is the minimum set of data that you should collect about
every head injury patient? There should be some kind of effort on the part of
professional bodies to define these kinds of standards the way the CAP has done
for pathology specimens for cancer. I think that’s a task facing all of our
professional societies.

And then what clinical data is really essential and can HHS help coordinate
data needs so clinicians are not overburdened, so they don’t have to record the
same thing in multiple different forms, multiple different perspectives?
Anecdotal experience that I’ve heard about from the National Health Service in the UK is that there are so many minimum datasets being generated, but all about the same kinds of things, that people are having to answer the same question over and over. If you ask how many different ways there are to ask whether a patient is a cigarette smoker or not, there are 17 different ways to ask and 17 different forms to fill out, and if every minimum dataset gets stuck in a pile on the practitioner’s desk those answers are not going to be forthcoming.
Either that or we’re going to have to hire two or three data clerks to help
every clinician to answer all the forms.

And then finally how can support and incentives be provided to clinicians to
provide this data?

And with that I will stop and be happy to take any questions or comments.

MR. REYNOLDS: Okay, Dr. Spackman, thank you. Justine, since you’re not here
and I can’t see you raising your hand, did you have any questions?

DR. CARR: No, I’m listening with interest but no, I don’t have a question
right now.

MR. REYNOLDS: No, that’s fine, I just wanted to make sure we didn’t leave
you out. Any members of the committee?

DR. HUFF: Thank you very much for the presentation, and certainly some important issues were raised. As you know, the primary purpose of this committee is to advise the Secretary of Health and Human Services on things that we think could improve the efficiency or cost or quality of U.S. health care. So in regard to these issues, what are the things that we should be thinking about in terms of making recommendations? What would enable this, what sort of things do we need to suggest that the government, HHS in particular, could do that would provide the benefits that we’re hoping to get from secondary uses of data?

DR. SPACKMAN: I don’t have really specific recommendations for you but one
of the areas that always comes up is the coordination between recording
clinical data and submitting that information for reimbursement, and I think
one of the things that I’ve heard anecdotally from people who are practicing
primary care physicians is that people are beginning to become familiar with
individual ICD codes and they’re beginning to think actually, they have
discussions in terms of which ICD code is applicable in this case.

Now maybe that’s okay, but alarm bells start going off in my mind when I hear that sort of thing start to happen, and I think that’s something over which HHS has a lot of control. So I think we want to continue down the path we’re going of mapping our clinical reference terminologies to the reimbursement terminologies, and we probably want to make sure that the outstanding issues there, for making those mappings automated, continue to be supported.

But I think down the road, I mean thinking more long term it would be, I
think it would be very important to have a tighter integration between what
people specify from a clinical standpoint with fidelity to the clinical
situation and then how that gets submitted for reimbursement. I think that
right now those are two very different and relatively unconnected streams and I
think they can be better integrated.

MR. REYNOLDS: Stan, did you have another?

DR. HUFF: I’ll let Simon go and then I’ve got some follow-up questions.

DR. COHN: Well, first of all, Kent, it’s great to see you again, and thank you for reminding me how wonderful the cancer hierarchies and diagnoses area is; that was actually a wonderful sort of walk-through of the area, and I appreciate your pathology background as well as, obviously, Stan’s.

I guess I wanted to follow up a little bit. I think you addressed one of the issues that I was thinking about as you were talking, which is of course that secondary uses of data depend on somebody getting that data onto the record in the first place, and that’s not a trivial task, especially with people being busy in their clinical practices. I don’t think anybody should realistically expect that somebody is going to faithfully reproduce what they did for the time of their encounter, because it would take at least as long as that encounter and probably more, so there’s always some abstraction and some synthesis that goes on. And so the question is making sure that at the end of the day the right things are there that can then be reused, and you began to look at that; I think your comment was, well, these specialty societies need to help us with that.

I guess I would ask you potentially that question, but the other piece is there are many reasons, as you described, for secondary uses of data, which you describe as anything other than the primary use being clinical care; clearly reimbursement is always a good use case and tends to capture people’s attention. I know you have some quandaries about specificity and how the clinical versus administrative all fit together. Do you think that the mapping, I guess I’m trying to think of how we’re going to get to that end point, and it seems to me that mapping may be a piece, making sure we’ve got the right data there is clearly a piece, and how do we make sure that that continuum actually works?

DR. SPACKMAN: In the near term I think a rule-based mapping, something that can be logically consistent and operate essentially the same way every time, would be a big improvement, because then you can basically say if you record the same data you’ll get the same mapping every time and it will be predictable, and then you don’t have to worry so much about exactly which code is getting assigned.

There are two perspectives on how you assign codes, there’s the perspective
of what’s true about the patient at any given time, so it doesn’t matter if you
record multiple things about the patient, they’re all true, they may have
pneumonia, congestive heart failure and diabetes all at the same time and so
all of those things are true. But when it comes to coding for sort of
classification or epidemiologic or reimbursement purposes often what you’re
saying is okay now I have an episode that’s been concluded and now I want to
say well what’s the primary discharge diagnosis and so you take everything you
know about that particular case and sort of summarize and then you want to put
it into one particular category and double coding is a bad thing, or double
counting is a bad thing.

So there will always be those two separate perspectives, but I don’t think they have to be completely disjointed from each other; it’d be nice if they could be much more tightly integrated.

So for example when we’re talking about long term, ten years down the road
when WHO comes out with ICD-11, ICD-11 should probably be based on SNOMED and
there should probably be a very tight integration between the two, that’s the
way I look at it. And so you don’t have this conundrum of well what are the
different inclusions and exclusions from a book perspective and have it all
pass through an individual’s head to try and make some sense of things but you
basically have an engine that tells you okay if these particular facts are in
the record then here’s the class that I want that particular case to go into.
And it can be much closer to an automated and consistent mechanism for doing
it.
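
[Editor’s note: a minimal sketch, in Python, of the rule-based classification engine Dr. Spackman describes; the facts and output classes reuse the Burkitt example from earlier in the testimony and are illustrative only.]

    # Deterministic mapping: the same recorded facts always yield the
    # same reporting class, with no human re-interpretation in between.
    RULES = [
        # (facts required, facts that must be absent, reporting class)
        ({"Burkitt lymphoma/leukemia", "leukemic phase"}, set(),
         "Burkitt cell leukemia"),
        ({"Burkitt lymphoma/leukemia"}, {"leukemic phase"},
         "Burkitt lymphoma"),
    ]

    def classify(facts):
        for required, excluded, label in RULES:   # first matching rule wins
            if required <= facts and not (excluded & facts):
                return label
        return "unclassified"

    print(classify({"Burkitt lymphoma/leukemia", "leukemic phase"}))
    print(classify({"Burkitt lymphoma/leukemia"}))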

DR. COHN: Can I ask a follow-up on that one? Certainly I’m buying your vision, though I guess the one question I would have, because, as you showed with your cancer hierarchy discussions, the devil’s always in the details in this area, to put it mildly. At least from my view, this lack of industry-accepted, CMS-approved mappings between administrative and clinical terminologies is probably the single biggest barrier to adoption of any of these clinical terminologies. I mean, to have to go through an entire process afterwards to derive a reimbursable code or something that they need for administrative purposes is a barrier to any thoughtful person.

So you talked about the future of this tight integration, is this all going
to be managed as a single unit so that everything stays in coordination or does
this start off ICD-11 day one everything in coordination and then you have the
SNOMED editorial panel making its changes and you have the WHO panel making its
changes? I mean you know how mappings change over time, what is your vision of
the sort of future state where everything really works? Once again the devil’s
in the details here.

DR. SPACKMAN: Well, the coordination obviously: if the classes are determined by a rule-based specification that drives off of the clinical terminology, then what is required is interaction between the individuals who define that classification engine and the individuals who add to the clinical terminology. So that would require some kind of ongoing update to both. There also needs to be some stability, right; there has to be an easy-to-use component of it as well, and so I would see the rules specification as something that changes less frequently than the clinical terminologies. In terms of updates and changes, you change and add to your clinical terminology frequently, but your classification is at a broader level of generality and a slower pace of change, and so I think it’s possible to mediate between those two on a temporal basis, responding to the need for rapid changes in clinical practice at the terminology end and responding to gradual changes in the broader classes that people are interested in.

The other possibility is also that, if you have a rules engine that gives you your epidemiologic classes, you could have more than one of those derived off of the same data. So that would raise the possibility of saying, if I’m really interested in infectious diseases I want a finer-grained classification, and in particular in areas of the world where there’s a need to really get into a finer level of detail about those things, then you could explode out the classification, and in other areas you’d leave it less exploded. And of course that’s sort of the basis of the 5th digit in the ICD classification today, but you could do it on the basis of a rules engine.

MR. REYNOLDS: Okay, Jeff?

MR. BLAIR: Kent, I’m trying to think of how I could maybe get at some more global things, because I don’t know the technical details of SNOMED and the coding and the clinical practice pieces here. But suppose we’re at a time sometime in the future where we have electronic health record systems and they have the ability to capture information using SNOMED CT and LOINC and RxNorm, and we wind up in that environment indicating to clinicians, nurses and physicians, that they’re to capture information focused primarily on improving patient care, patient care quality and effectiveness.

And then a study is done, and the study winds up saying, I’m just going to take that data just as is, data that was captured using these clinically specific terminologies at the point of care for patient care purposes, period. And the study winds up revealing, and I’m just throwing out percentages here, the study hasn’t been done, that 70 percent of the information required for reimbursement purposes can be met with that data without asking the clinicians or health care providers for any additional information. And for public health, 50 percent of the public health information can be met without asking for any additional information, and maybe for clinical research, maybe it’s down at 30 percent of clinical research needs that can be met without asking for any additional information. That would obviously be a tremendous savings to the health care information infrastructure, even though it falls short of the ideal of the vision of having clinically specific information captured once at the point of care and then having it available for secondary uses for reimbursement and clinical research and public health.

So kind of the question that I’m asking: once that type of information might be available, you could almost see how the federal government and other foundations would wind up saying we could save so much, we need to accelerate, we need to support the ability for this to happen, and we’ve got the National Library of Medicine going ahead and making the mappings and the mappings are facilitating this. But there are two questions that they might have, and I’m not even sure you can answer these questions, but this is what I’m pondering.

Number one, do you have any thoughts on how we might construct the evaluation that would lead us to percentages like that? That’s number one. And number two, once we have that kind of data, and we wind up finding that 70 percent of the reimbursement needs can be met by the clinically specific information, what incentives are applied in that case to close the gap, or to get health care providers to move more quickly in adopting electronic health records that could provide these capabilities? And we may have different answers for reimbursement versus public health versus clinical research.

So at least you sort of know where my mind is going on this; NCVHS would like to make recommendations in this area to move the ball forward towards that world where we could address those questions, so in that context do you have some suggestions for us to get closer to that environment?

DR. SPACKMAN: I’m not sure these are concrete suggestions but just some
observations about where we’ve been. There has been over the last several years
a project with the CDC and the CAP to look at how to get pathology reports
coded using SNOMED and then transmitted in a standardized fashion so that they
can be used for cancer registry reporting, so that research is ongoing, I think
there’s experience there, and similar kinds of projects I think are needed with
other specific types of reporting. So we were looking specifically at cancer
registries and their reporting needs, I think you could take any other
secondary use and identify a particular constituency who can provide that data
and then do some research evaluation studies similar to this one with those
other particular data in mind, I think that would be a very useful way to go.

And then in terms of incentives, again I think the experience from the CORI
project is very instructive, what was the incentive that those endoscopists
required, well they got a little bit of free software, they had to buy their
own hardware, they had to commit to using a computerized reporting tool. But
the incentives were making their practice work better: they basically could generate their referral letters, their report letters back to the referring physicians, more quickly and with legible reports, and then there was
this sort of altruistic incentive that I’m contributing to the greater good,
I’m helping out my profession first, and general research second. So I think if
we’re creative we can come up with similar kinds of ways of incentivizing the
clinicians.

There is also, I mean I’ve heard people talk about if people have to spend
extra time to collect data we ought to pay them for it and I think there’s some
truth to that but there’s a limited amount of money to go around and we
shouldn’t assume that all data should have additional resource put into
collecting it, I think there are ways of collecting data using electronic
mechanisms that can actually increase the efficiency of the people who are
collecting it. So some research that goes to the heart of how do you collect
data more efficiently, time savings to the individuals so that they collect
better data with less time, there’s a sort of a win/win there that we really
ought to go after because there you actually save money and get better data at
the same time and I think it’s definitely possible. Not in all circumstances
but that’s the low hanging fruit we should be going after.

MR. BLAIR: Can I ask just a clarification: when you say better ways to collect data, are you thinking of technical approaches, or are you thinking of professional guidelines for collecting data? A clarification of what you mean when you say we should explore that.

DR. SPACKMAN: Well, definitely from a technical perspective; I think user interfaces, terminologies, how they interact, there’s a technical side of it. And I would say your other suggestion, which I wasn’t thinking of, also applies: if you have a professional organization like CAP that can say to pathologists, if you report using these data elements you’re going to have a better report, you’re going to have a more complete report, the oncologist will get better information, that probably saves time ultimately. I know Liz Hammond did a study at the University of Utah where she got fewer phone calls back from the clinicians about her cancer reports, and it actually saved them time by following a checklist, because they then had less runaround in following up on loose details that could have been included to begin with. So I think in general you get efficiencies by following those kinds of checklists and protocols.

DR. CARR: Harry, this is Justine, I have a question.

MR. REYNOLDS: What we may do, Justine, since we’re really running long on this segment: Judy has given her time to you, and Stan, if you wouldn’t mind, maybe we can cover other questions during the 10:30 to 11:00 discussion, is that okay? Because otherwise we’re not going to get through this as we need to this morning. Justine, go ahead please.

DR. CARR: Just as we talked about audiences for data, I wanted to make sure that we call out the other audience, which is the quality initiatives, because the parallel process that’s happening today is that very specific datasets are evolving, and the only way to get that data right now is nurses or clinicians reviewing handwritten notes to get that information out. So it’s a little bit different from payment and the public reporting, because it’s sort of disease specific and it may evolve; the data that we’re looking for this year on heart failure may change to something else in a year or two. So it’s sort of different, it’s not data that exists that we’re going to roll together, but a question that we formulate and then go back and dive into the clinical notes to try to cull out. I think it’s just important that we think about the quality initiatives, pay for performance, all of that, where we still have no crosswalk to the clinical notes.

MR. REYNOLDS: Any comments, Dr. Spackman?

DR. SPACKMAN: No, I agree very much.

MR. REYNOLDS: Okay, we haven’t even been able to map these two clocks, so I’m going to pick this one as the real clock. Let’s take about a six-minute break, let’s come back at 10:00, and then we’ll hear from Dr. Cimino and then we’ll leave some time for discussion, and Stan, I’ll make sure we get to your questions.

[Brief break.]

MR. REYNOLDS: Okay, thank you.

Agenda Item: Secondary Uses of Clinical Data – Dr.
Cimino

DR. CIMINO: Well, thank you, my name is Jim Cimino from Columbia University, and I thank you for the opportunity to come speak here; I also thank you for the honorary Ph.D., I’m actually an M.D. I’m going to speak to you today about secondary uses of clinical data. This is I think the fourth time I’ve appeared before this committee talking about a number of things, and a lot of times it’s been about ways of reusing data; these are things that we’ve been doing in our own institution and things that I’ve talked about before. However, in the last few years I’ve been working more and more in vendor settings and with health care institutions that are not able or not willing to develop the terminology approaches that we’ve been using at Columbia, and so I’ve been getting somewhat of a taste of the real world, and I come to you humbled by that experience.

One of the perks of being in this vocabulary dodge, other than eating the bagels on the Delta Shuttle, is following Kent Spackman, which I’ve been doing for some 20 years, and being able to learn from him, so I’ve learned quite a bit today. Also from Stan Huff; one of the things that Stan said to me when I asked him what I should talk about was, what’s the low-hanging fruit?

And that got me thinking, as I sit behind a table listening to vendors talk to customers, and customers complaining about the systems and trying to make them work in the real world, trying to figure out how they would fix this problem without having a full-blown clinical informatics department building things like the medical entities dictionary that we have at Columbia, or the HELP system at LDS, or any number of other systems that have been built at academic medical centers: how would they get where they want to go? And so I started thinking about what some of the problems are, focusing on some specific areas of data reuse, and I’m going to use just a couple of examples that I’m bumping into on a daily basis, but I think they’re good representatives of other areas of data reuse.

So data are captured using local terms, for the most part, in the real world, and vendors and users don’t know how to translate those local terms into controlled or standardized terminologies; even if those terminologies exist, they’re not sure how to get there.

The vendors and users don’t know how to aggregate local terms into useful
classes for doing the tasks that they want to do. A lot of data reuse requires
the transformation of data from its raw form into some other form, often an
aggregation class, for instance a patient might be taking a specific drug and
you want to know is the patient on a beta blocker, which was an example that
Kent had in his slides. So we never say oh, put the patient on a beta blocker
and make it ten milligrams while you’re at it, we always say a specific beta
blocker, but when we want to ask that rule we need an aggregation class and so
in the real world there’s not a lot of knowledge about how to do that
effectively.

The result is that people hardwire terminologies into their systems so when
they write a rule that says is the patient on a beta blocker, they don’t say
that, they say is the patient on this drug or this drug or this drug or this
drug, so they’ve hard wired these things into their systems and that causes
problems for them.
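[To illustrate the contrast: a minimal sketch in Python of a hardwired rule
versus one that asks a maintained aggregation class; the drug names and the
class table are invented for illustration, not from any actual system.]

```python
# Hardwired approach: the rule enumerates specific drugs and silently
# breaks when the formulary adds a new beta blocker.
HARDWIRED_BETA_BLOCKERS = {"metoprolol", "atenolol", "propranolol"}

def on_beta_blocker_hardwired(active_meds):
    return any(drug in HARDWIRED_BETA_BLOCKERS for drug in active_meds)

# Aggregation-class approach: the rule names the class once; the
# terminology, maintained separately, supplies the current membership.
TERMINOLOGY = {  # hypothetical class -> members table
    "beta blocker": {"metoprolol", "atenolol", "propranolol", "carvedilol"},
}

def on_drug_class(active_meds, drug_class):
    return any(drug in TERMINOLOGY[drug_class] for drug in active_meds)

print(on_drug_class({"carvedilol", "aspirin"}, "beta blocker"))  # True
```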

So one example is summary reporting, now summary reporting you might not
think of that as reuse of data but it’s a kind of reuse because data are
initially collected for one particular purpose, they’re presented to the user
or to the clinician for that purpose, and then anytime we use it again I tend
to think of that as reuse, so even summary reporting is the most maybe
archetypal example of reuse of data and part of the reason is that we have to
aggregate it. In a typical laboratory report we aggregate the data into columns
or rows based on some grouping that we’re interested in seeing over time,
laboratory systems typically don’t give us a single code for hematocrit, they
have two or three or four different codes for hematocrits because they have
different ways of collecting the data and they want to keep them separate in
their coding systems, and we have to acknowledge that, but when we summarize
them we want to ignore those differences and just see the clinically useful
stuff.
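[A sketch of the row aggregation being described: several local hematocrit
codes collapse into one summary row; the local codes here are invented.]

```python
from collections import defaultdict

# Hypothetical map from local lab codes to one clinical aggregation class.
LOCAL_TO_CLASS = {"HCT1": "hematocrit", "HCT-ART": "hematocrit",
                  "HCTWB": "hematocrit", "GLU": "glucose"}

def summary_rows(results):
    """Pivot (local_code, date, value) results into one row per class."""
    rows = defaultdict(dict)
    for local_code, date, value in results:
        rows[LOCAL_TO_CLASS[local_code]][date] = value
    return dict(rows)

print(summary_rows([("HCT1", "9/20", 41), ("HCT-ART", "9/21", 39)]))
# {'hematocrit': {'9/20': 41, '9/21': 39}} -- one row, two local codes
```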

Another example, order entry systems, clinician order entry systems, a hot
topic right now, why is that, it’s not just about legibility, it’s also about
getting the systems to help us with the order entry process and do some
checking whether those are full blown medical clinical alerts, or whether it’s
very simple things like making sure you didn’t order the same drug twice. Well,
if you write a rule that says if this drug has already been ordered then send
an alert, that’s great if you’re looking at a specific drug. But if you say
well, if they’ve ordered any codeine drugs and they’re ordering a codeine drug
I want to tell them that they’ve ordered a codeine drug, how do I do that
without enumerating all of the codeine drugs, and having one rule for all the
codeine drugs and another rule for all the aspirin drugs and so on and so
forth?
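[One way to read that as code: a duplicate-order check written against a drug
class rather than an enumerated list inside the rule; the order catalogue and
class assignments are hypothetical.]

```python
# Hypothetical order catalogue entries tagged with fine-grained classes.
ORDER_CLASSES = {
    "acetaminophen-codeine": {"codeine"},
    "codeine sulfate 30mg": {"codeine"},
    "aspirin 325mg": {"aspirin"},
}

def duplicate_class_alert(existing_orders, new_order):
    """Alert if the new order shares a drug class with any existing order."""
    new_classes = ORDER_CLASSES.get(new_order, set())
    return any(ORDER_CLASSES.get(o, set()) & new_classes
               for o in existing_orders)

# True -- both are codeine drugs, and no drug list appears in the rule itself.
print(duplicate_class_alert(["acetaminophen-codeine"], "codeine sulfate 30mg"))
```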

So this is a Sumerian clay tablet, I like to use it as an example of data
that’s very easy to read, if you can read Sumerian, very permanent, and great
for its intended initial use. It’s a little difficult to reuse the data, so if
you want to have for instance summary reporting over time it’s a little hard,
but we do it, laboratory systems do it by saying okay this Sumerian clay tablet
goes in this column and this one goes in that column, and we can produce a
report that has all the data the way we want to see it. But it’s all hard
wired.

So it starts to break down because the labs are using these local
terminologies and the summary reports map the local terms into columns but then
the terminology in the laboratory changes and that doesn’t automatically
transfer to the reporting program because the reporting program may not even be
being done in the laboratory, it may be being done somewhere else and so you
get this breakdown in the process.

So here’s our Sumerian clay tablets, if we suddenly started using a
different kind of Sumerian clay tablet and don’t update our lab we’re going to
lose data from our reports and that’s a problem, it’s not just a problem of
well the report is kind of ugly, clinical decisions are based on this kind of
information and so at the very least we might end up reordering tests over and
over again because the results aren’t showing up in our summary report, and
that might be just costly but it also might be dangerous to the patient. But at
worst we may be missing dangerous trends that are occurring and would require
action.

So lest you think I’m making this up, this is an actual screen shot of a
clinical information system, a commercial clinical information system that will
remain nameless, from a large clinical institution that will also remain
nameless, and I know it’s hard to read there but what I want to point out is
this is basically a bunch of lab tests, it’s a lab summary, and there’s a
glucose level. Now the fact that glucose level is listed under lytes, or
electrolytes, shows that the person has forgotten their basic organic
chemistry, but glucose level appears here and it also appears down here, a
whole blood glucose is down here under blood gasses. So if you wanted to know
all the patient’s blood glucoses you’d have to look in at least two places, and
actually this screen, if I were displaying it at full length, would be about
four or five stories high.

So if you’ve got data scattered all over this display it’s going to be very
hard for a clinician to find it and be able to interpret it properly and that’s
just one example, there are others too, there’s a hematocrit under blood which
sounds good but there’s also a hematocrit under CBC, those are not the same
data, those are different tests from the laboratory, different local terms, and
they’ve ended up getting mapped to different rows in this summary report. So
this stuff, this is happening in the real world as we speak.

So order checks have the same problem, we have a catalogue of orderable
items typically in an order entry system and those catalogues typically lack
any kind of fine grain classes. They might have large grain classes like drug
or laboratory test, but they don’t have things at the level of hematocrit or
beta blocker or codeine medication. And so the checks, for instance duplicate
orders, have to use explicit lists of terms if they want to do more than just
say this ordered thing is exactly equal to this one, which you could write a
general rule for. But if you want to say well, if I’ve ordered any codeine
drugs, I need a list of all the codeine drugs in my check to make sure that I’m
not missing one.

The problem is this list can be incomplete because the people generating the
lists have to do it by looking through the order catalogue manually usually and
the lists become outdated very rapidly as the things in the order catalogue
change. So there’s no mechanism for knowing how or even when you have to update
the lists and when I talk to people about this and say well what happens when
the pharmacy adds a new drug and they say well the order alert committee is
supposed to add it to the list. And I say well how does the order alert
committee know that a new drug has been added and they say well
they should be checking with the pharmacy to see what drugs have been added and
then figure out what lists to update. So you could imagine that might work but
it would be also easy to imagine that it doesn’t work.

So terminologic solutions, and these are all things that everybody is
familiar with, first we start with a standard terminology that has clinically
useful terms, and there are plenty that are available. We want to map the local
terms to these standard terms without any substantial loss of meaning, so the
fact that I’ve got an Allen Pavilion Chem Seven or an Allen Pavilion Glucose
test is not so important as the fact that I’ve got a glucose test, so I want to
map these things to a level that maintains their clinical meaning.

Then the standard terminology should also provide aggregation classes that
will allow me to reuse the test. Now in my example on the screen there I had a
whole blood glucose and I had a serum blood glucose, now I would not erase
those differences because they’re clinically significant some of the time but I
would still want an aggregation class that would allow me to say what are the
glucoses in the blood and the serum and the plasma, I call that class
intravascular glucose but it could be called anything else.

So we have this mapping from local to standard to aggregation, so then
at the bottom are some local terms from our system, Allen Pavilion Glucose,
Stat Glucose, and a Gluc, which, you can figure out that it means glucose, but
the lab doesn’t tell you anything more than the name Gluc so you have to just
know what that means. And then we have our standard terminology that might look
something like this, we’ve got a class of lab tests, we’ve got intravascular
glucose tests, and then we’ve got some subclasses under there of other tests
and what we want to do is map to the clinically useful terms and then we can
use these other terms for aggregation as applicable in our various
applications.
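[In miniature, the local-to-standard-to-aggregation chain just described; the
glucose terms follow his examples, the rest of the hierarchy is invented.]

```python
# Step 1: local lab terms mapped to a clinical-level standard term.
LOCAL_TO_STANDARD = {
    "Allen Pavilion Glucose": "serum glucose",
    "Stat Glucose": "serum glucose",
    "Gluc": "whole blood glucose",
}
# Step 2: standard terms placed under aggregation classes (is-a links).
IS_A = {
    "serum glucose": "intravascular glucose",
    "whole blood glucose": "intravascular glucose",
    "intravascular glucose": "lab test",
}

def aggregates_to(local_term, target_class):
    """Walk up the is-a hierarchy from a local term to test whether it
    falls under the requested aggregation class."""
    node = LOCAL_TO_STANDARD[local_term]
    while node is not None:
        if node == target_class:
            return True
        node = IS_A.get(node)
    return False

print(aggregates_to("Gluc", "intravascular glucose"))  # True
```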

And this is just an example of lab summaries at our institution where we
actually do this; now we don’t have a standard terminology, but we do have a
terminology that allows us to map from the local terms used by various lab
systems into these aggregation classes.

This was a lab summary that was done at the end of 1999, before Y2K hit and
turned the system off forever, and it was generated automatically using these
aggregation classes, and the same day I shot that, the same moment I shot this
picture I shot this picture because we were moving to a web based system, a
completely different architecture, a completely different system that was able
to use the same aggregation classes to produce the same table although they
decided to do it as columns rather than rows in this particular thing. But if
you look at the handout you’ll see that the numbers are actually identical.

Okay, another example that we use successfully, this is WebCIS, this is our
web based clinical information system, it’s basically a web based viewer into a
large repository that we’ve been accumulating for the past 18 years or so. And
I’m showing on the screen some laboratory tests and in the center there is a
prothrombin time and next to it is a little blue blob which is actually an info
button that has a white letter I in it although I can’t even make it out from
here. But if you click on that you get this list of questions and these
questions are determined by the lab test that that little blue icon was sitting
next to.

Now we don’t have a list of questions for every specific lab test, instead
we have a list of questions for the aggregation classes of these lab tests. So
in this case these are questions that come up about any prothrombin time and so
one of the first questions there is what is the NYPH, or New York Presbyterian
Hospital, guideline for managing adult patients with an elevated INR due to
Warfarin. So what we want to do is we want to make sure that the right
guideline gets out to the clinicians when they’re looking at these lab tests
and the only way to do that in our opinion is to have an aggregation class of
these lab tests so that if the lab adds a new prothrombin time you’ll still get
this question when you click on that icon and when you click on the question
you get the guideline popping up. So it’s a way to deliver this information
quickly and efficiently and it requires these, in our architecture at least
requires these aggregation classes because maintenance would be a nightmare
otherwise.
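[A sketch of that info-button lookup: questions hang off the aggregation
class, so any local test that rolls up to it inherits them; the question text
paraphrases his example, everything else is invented.]

```python
# Hypothetical local prothrombin time codes rolled up to one class.
IS_A = {"PT (Allen Pavilion)": "prothrombin time",
        "PT (Stat Lab)": "prothrombin time"}

# Questions are attached to the class, not to each local test.
QUESTIONS = {
    "prothrombin time": [
        "What is the NYPH guideline for managing adult patients "
        "with an elevated INR due to Warfarin?",
    ],
}

def info_button(local_test):
    """Return the questions for whatever class the local test rolls up
    to; a newly added prothrombin time inherits them automatically."""
    return QUESTIONS.get(IS_A.get(local_test, local_test), [])

print(info_button("PT (Stat Lab)"))
```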

So there are things called integration engines which are out there in large
institutions where they have multiple systems and want to bring their data
together into a repository or shift data from one system to another. And these
integration engines actually offer a solution because what they do is they use
translation tables to convert local terms into other terms usually into some
standard term, but it could be into a different local term, but there’s a
translation table that says here’s a list of local terms and this is what we
want to call them for use outside of the system that generated it. And then we
can actually do aggregation through that translation table so for instance I
can say, if I wanted to I could have a translation table that says okay here’s
a list of all the drugs and now anytime I see that drug just send out the code
for beta blocker and then I’ve done my aggregation.
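[In table form, the kind of one-way translation an integration engine applies
as a message passes through; the codes are illustrative only, and the sketch
also shows the information loss discussed next.]

```python
# A static translation table: every local drug code is rewritten to a
# single outbound code -- here, collapsed straight to a class code.
TRANSLATION = {
    "RX00123": "BETA_BLOCKER",   # locally this was metoprolol
    "RX00456": "BETA_BLOCKER",   # locally this was atenolol
}

def translate(local_code):
    """One-way, many-to-one rewrite applied in-flight."""
    return TRANSLATION.get(local_code, local_code)

print(translate("RX00123"))  # "BETA_BLOCKER" -- which beta blocker
# it was is gone once the message leaves the integration engine
```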

There’s some problems with that. First of all this only works for extrinsic
data because the integration engine sits between systems not inside a system
that might need to do this aggregation. The translation is static and it may be
unidirectional, once you’ve translated it it’s translated into that new form
and so if the standard terminology changes you don’t have a way to go back and
figure out that this might be a new term. The aggregation is also static so if
you’ve aggregated it into beta blockers and then later you want to say well
what kind of beta blocker you can’t because you’ve translated it into this
class called beta blocker, and you can only get one, usually you only get one
aggregation class per term because it’s a one to one translation or many to one
translation but not a many to many translation. So there’s some drawbacks to
this approach but at least it’s going in the right direction.

So in answer to Stan’s question to me what are the low hanging fruit I
thought about this and thought well, where could we get to by doing some of the
easy stuff. Well, first of all we want to think about what data are captured in
coded form because that’s where we start, so I don’t want to start with natural
language processing and doctor’s notes and that sort of thing, coded data. I
want to pick domains where there are good controlled terminologies out there
already so I don’t have to invent them. And I want terminologies that have
clinical level terms, things that map well to the level of detail that are
important clinically. But I also want them to have aggregation classes,
preferably multiple aggregation classes, preferably multiple levels of
aggregation classes. And they also have to have a responsive maintenance
process because these underlying local terminologies change, we need to have
ways to map them into the standard terminology as it goes forward.

So with that I put together a little table that is just a prototype of what
we might end up with. But I’ve got a bunch of categories on the left hand side,
lab results, problem list, medications, and allergies. Those are data areas
where they’re already in coded form and there are good controlled terminologies
out there —

MR. BLAIR: I missed one, lab results, problem lists —

DR. CIMINO: Medications and allergies.

MR. BLAIR: Thank you.

DR. CIMINO: So the first column there is are the data coded, lab results are
pretty much always coded in laboratory systems. Problem lists are sometimes
coded, so where they’re coded we can try to tackle this. Medications are pretty
much always coded in some form or another, allergies again like problems are
sometimes coded, sometimes they use medication codes but then for other
allergies they might just be free text. And then are terminologies available
for lab, clearly LOINC is available for lab. Problem list, we have a couple
choices, ICD-9-CM, which is typically used in problem list because it gets you
the billing code right away and SNOMED is another one that’s available.

Medications, there’s NDC codes which have been around for a long time and
there’s RxNorm, which is a recent addition to the armamentarium. And then for
allergies, like problem lists, there’s SNOMED and RxNorm; RxNorm covers the
medication allergies well but certainly doesn’t cover things like milk
allergies and that sort of thing, SNOMED does a pretty good job of that. And
when I say pretty good I mean somewhere between okay and perfect, I haven’t
done that evaluation but it’s somewhere in a good place.

Now do these have clinical level terms? Laboratory, LOINC has clinical level
terms, the ICD-9 often has clinical level terms but not always, sometimes it’s
got the pesky not elsewhere classified term which is not a real useful clinical
level term unless you want to treat people with antibiotic not elsewhere
classified. SNOMED has mostly clinical level terms and again by that I mean
maybe it’s close to perfect but I haven’t done that evaluation. Medications,
NDC has clinical level terms if you’re interested in products but it doesn’t
have clinical level terms if you’re interested in orders so there’s no NDC
code, if I say I want to give diazepam five milligram tablet there’s no NDC
code for that, there are a whole collection of NDC codes for specific products
that map to that but no single one that I would use so that’s why I have a yes
and no in that column. RxNorm though does have clinical level terms, in fact
that’s the reason it was created was to provide the notion of a clinical drug.
And then SNOMED and RxNorm cover allergies mostly, they do well at the clinical
level where they have the coverage.

Now what about aggregation classes, LOINC has aggregation classes although I
haven’t studied them to see how good they are but my sense is knowing the LOINC
structure that it should be pretty easy to create aggregation classes like find
me all the serum sodium tests or find me all the intravascular sodium tests.
Problem lists, ICD-9-CM has some aggregation classes but doesn’t really do a
very good job in my opinion. SNOMED has I think good aggregation classes for
problem lists as well as for allergies, NDC does not have aggregation classes
but RxNorm does have aggregation classes for both medications and allergies.

And finally the responsiveness of the development, everybody gets a yes
except for ICD-9 and NDC. ICD-9 you get updates once a year and if they listen
to you you’re lucky and NDC doesn’t do updates at all, it relies on the
manufacturers to do the updates so the distribution of the updates is spotty at
best.

So this is sort of a starter set of some of the domains that I think would
be the low hanging fruit, especially because there are good terminologies out
there, notably LOINC, SNOMED, and RxNorm, that I think we could do an awful lot
with if we could figure out how to get the locally coded data into these forms.

Now what could we do with just those few domains? We could do a lot, summary
reporting of laboratory data across systems, automated billing, because we can
get our problem lists into a form that could be used for ICD-9 reporting. Order
checking, if we could do the aggregation classes, alerts, expert systems, all
these things I think we can think about how we would do these things if we had
just those limited sets of data, labs, problem lists, medications, allergies,
we could do a lot of very interesting powerful things once we get there.

So what do we need to exploit these data? First of all we have to have
terminologies that are readily available and I think that we’re there,
certainly LOINC, RxNorm, and SNOMED are all readily available. We
need terminology servers and services to help the vendors and the users get at
these terminologies, in particular for doing the things like translation and
aggregation. We need to be able to map the local data to clinical terminologies
and then map between these clinical terms and aggregations. So users need to
understand how to use these aggregations and to demand it in their systems, to
be able to say I don’t want to write an alert that has 15 drugs listed in it
that I have to review every day to make sure it’s not out of date, I want to be
able to put one code in there and have it work automatically dynamically as the
terminology changes. Users don’t understand that that’s a way they could solve
their problem so they’re constantly saying you know what, it’s easier to just
put in the list than to try to understand how to do aggregation. And the vendors
need to understand this as well and provide it in their products and not just
pay lip service and say oh yes, we do this, when in fact they don’t.
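[No particular server interface is specified in the testimony; a hypothetical
minimal shape for the translation and aggregation services being asked for
might be:]

```python
class TerminologyService:
    """Hypothetical terminology-server interface: translate a local
    code to a standard code, and test class membership dynamically."""

    def __init__(self, local_to_standard, is_a):
        self.local_to_standard = local_to_standard
        self.is_a = is_a  # child code -> parent code

    def translate(self, local_code):
        return self.local_to_standard.get(local_code)

    def subsumed_by(self, code, class_code):
        # Walk up the hierarchy; membership tracks the terminology as
        # it changes, with no drug list embedded in the calling rule.
        while code is not None:
            if code == class_code:
                return True
            code = self.is_a.get(code)
        return False

# An alert then carries one class code instead of a 15-drug list:
#   svc.subsumed_by(svc.translate(order_code), "BETA_BLOCKER")
```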

So what can government do? Well I’m always leery of telling the government
what to do because you’ve got to be careful what you wish for, but I think it’s
safe to say that it would be good to continue to support the construction,
maintenance and dissemination of the terminologies that are already out there
and the three examples I’ve used have all been done in some degree or another
with the help of the government.

We want to find ways to provide incentives to get that local data into
standard data, so that data don’t get reported on pieces of paper anymore, they
get reported in LOINC, medications get reported in RxNorm and problems get
reported in SNOMED, and let somebody figure out how to convert them into
billable codes for reimbursement.

I thought a lot about how do we get there with the users and the vendors,
and by users I’m talking about the people that buy the systems from the vendors
and actually implement them, and I think educational efforts might be a good
first step, to try to make people understand clearly the benefits of this
approach of local to standard to aggregation so that they can see, in their
own settings, how it would benefit them.

And then there’s always research that has to be done, we have to support
application development for finding ways to get applications to use these
terminologies, the maintenance and dissemination of terminologies, and the
mapping methods are all areas that are still largely only done in academic
clinical informatics groups that have been doing this for a long time and are
good at it but we don’t have a way to get it out there for the rest of us.

And then aggregation methods last.

And that’s it, you wanted half an hour, I did half an hour.

Agenda Item: Discussion/Follow-up on Secondary Uses
of Clinical Data

MR. REYNOLDS: You did great, I appreciate the way you put that together.
What I’d also like to recommend is Dr. Spackman if you don’t mind joining us at
the table since we’ve got between now and 11:00 on our schedule to discuss all
this and I know Stan had some other questions and we want to kind of weave this
all together let’s just get everybody at the table and we’ll go ahead and make
sure we cover, maybe get perspectives from both of you as we go through things.
So Jeff is on the list, then Judy, then Steve, okay, Steve.

DR. STEINDEL: Jim, thank you very much, I enjoyed the talk a lot but this is
a pet peeve of mine and I want to make a slight factual correction. LOINC does
not really have publicly available aggregation logic, it’s buried in their
RELMA system, and we’ve spoken to them multiple times about making it
publicly available. And if one of the funders of LOINC will listen to me saying
that again and please put it on her list I would be most appreciative.

DR. CIMINO: I guess I’m too close to LOINC, I’m only peripherally involved
in the LOINC committee but I use LOINC and so the knowledge structures actually
look very readily available to me, but you’re right, they don’t have codes for
aggregation classes, I haven’t paid that much attention to whether they have
them or not because I don’t use them —

DR. STEINDEL: We’ve actually spoken, I’m on the LOINC committee as well and
we’ve spoken to LOINC multiple times about making these more public.

MR. REYNOLDS: Jeff?

MR. BLAIR: I think this is a quick question and Jim you indicated you’re
working with some vendors now and I know Kent through the SNOMED users group
has been working with vendors, and clearly when you wind up being able to
communicate to a vendor of electronic health record systems or e-prescribing
systems or order entry systems, I’m kind of broadening it just a little bit,
and you point out the benefit of clinically specific terminologies, SNOMED,
LOINC, and RxNorm, they’re going to look at it from the standpoint of their
immediate customer, who’s going to purchase that system and pay for that
system. Both of you obviously know that there’s a much broader picture here and
that gets to the secondary use of information, the vendor won’t look at it that
way. Is there anything, any suggestions that you have to us, not necessarily to
get the vendors to change their minds, they still may be driven by their
immediate needs and meeting their customer needs, but do you have any thoughts
about what we could do to encourage the vendors to at least facilitate or
enable the use of their systems so that the nation gets the benefit of the
secondary use of this clinically specific information?

DR. CIMINO: Okay, well, taking the perspective that I presented here which
has to do with translation and aggregation, because there are other ways to
reuse data but I’m taking this specific one because I see it as a way to get
somewhere quickly.

I think that the vendors will do that with the right pressure, the vendors
originally said you don’t need HL7, then they all provided HL7, or they said
they did, and then eventually when the rubber met the road and you realized
well, this isn’t really HL7, now they’re finally actually giving us real HL7
interfaces. So I think we need to follow
that kind of a path, clearly the vendors won’t do this until two things happen,
one is they have to see how they would accomplish it within their own systems,
how they would provide that functionality without having to retool their entire
systems.

And secondly their users or their customers have to demand it, so that’s why
I had education for both of these groups as one of the things that should be
promoted. The customers have to realize that despite all the flashy Power Point
slides from the vendors they’re the ones that are going to be left holding the
bag when the vendor finishes installing and goes home, they’re the ones that
are going to have to maintain the rules and maintain their systems for
aggregation classes and deal with the changing laboratory terminology and
SNOMED terminology and all the terminologies that they want to use in there.

The vendors right now will say oh you want to use SNOMED, great, here’s
where you put SNOMED, and it’s just a big empty table and it basically says
customer’s terminology here and if the customer says I want to use SNOMED then
you pop it in there and the customer runs into the kinds of problems that Kent
talked about. And same thing with LOINC, the laboratory system says oh you want
to use LOINC codes, no problem, here’s where you put your LOINC codes. But then
it turns out that the laboratory doesn’t want to just use LOINC codes, they want
their own codes, and they want to be able to map to LOINC and so they need two
tables and ways to map between them and then the vendor is gone and they’re
stuck with this product.

So the customers need to understand what functionality they want and what
the vendor has to provide to give them that functionality so that they’re not
left with these unmaintainable systems.

DR. SPACKMAN: Just one other activity that I’m involved in that I think may
be helping here is the certification process, I’m volunteering on the
interoperability workgroup for the Certification Commission for Healthcare IT,
and I think that group is trying to push, not immediately but over the next two
or three years, towards having certification standards so that vendors can come
forward and say we’re doing a better job than our competitors because we are
providing better interoperability, I think that’s another possibility. So to
the extent that the government can support that activity and pay attention to
it and also push it forward I think that would be helpful.

DR. CIMINO: If I could just add, is it pronounced CCHIT, or how do you
pronounce that?

DR. SPACKMAN: CCHIT.

DR. CIMINO: One of the things that the interoperability workgroup then needs
to look at is how terminology, what the terminology specifications are that
need certification, so those have to be made explicit and that’s an important
part of the process.

MR. REYNOLDS: Okay, Judy and then Stan.

DR. STEINDEL: Kent, I missed your talk because I actually was on a CCHIT
conference call, and I appreciate the comments that you made, because the main
essence of that call, in relation to the Katrina incident, was the department’s
need for transferable health records in case of a disaster, and I think the
interoperability phase of CCHIT is going to be accelerated. So we may be seeing
more of the type of thing you’re envisioning more rapidly than you originally
were thinking.

DR. WARREN: My question is really the one that I bowed to Justine and
Justine added another component to it, and that was a comment that Kent made
and Jim has also kind of referred to it implicitly in some of his slides, and
that’s the notion of what data should we be capturing for secondary use and
making those decisions. And when Kent threw that in there, for the first time
it just kind of hit me that we should be looking at the professional societies
to start
defining what are these minimum datasets about different issues and CAP has
done a nice job with the checklist that they’ve developed, the way that they
have always taken a look at structured data in their reports, probably leading
most of the specialty areas regardless of whether you’re in medicine, nursing,
or social work or whatever. And then Justine brought up the fact of we have all
of these quality organizations that are coming out with different criteria and
things that they also want collected so that you can report outcomes, and then
I’m thinking about the regulators like JCAHO and those who are saying we also
want you to collect this data in order to have accreditation.

So the issue comes to my mind first of how do we get the professional
societies engaged that maybe part of their work is to help us identify some of
these datasets, and then I have this horrible flashback of looking at the
professional organizations, the guideline developers, the quality people and
the regulators, who adjudicates all of these data elements so that we don’t
wind up like the UK with 17 different forms for reporting smoking cessation. So
I’ll open that up to either one of you.

DR. CIMINO: Let me go first because I have the short answer, which is I
specifically avoided that by picking low hanging fruit, so picking things
that we’re already doing well, there’s a lot we could do with those,
let’s get started on them, and then once we see the value with
that people are going to go oh, you know we really need to have better
terminologies for allergies, or why don’t we start doing terminologies for pain
scales, and then we can start building on the beginnings that we get from
what’s already available.

What we definitely don’t want, and people talk about this, is where especially
the government says we’re going to require additional reporting and that’s
going to improve quality; well, it’s going to deteriorate quality because we’re
going to spend all of our time recording instead of taking care of the patient
and recording what’s really going on with the patient. And so from that I
transition to Kent
because he’s got the other domains, the domains that are not the low hanging
fruit, the things like the oncology terminology, which I’m really glad I didn’t
memorize in medical school —

— [Laughter.] —

DR. SPACKMAN: It occurs to me that there needs to be some kind of forum
where these groups can come together though and I don’t know of any where
somebody who’s defining one set of minimum criteria for reporting on how well
you took care of diabetic foot care gets together with another group, one group
is looking at vascular diseases, another group is looking at endocrinology
diseases. I don’t have any suggestions but it seems to me that this committee
might be able to play a role, once people recognize that there’s a need for
that coordination, you’re anticipating that people will see that need, I don’t
think they’ve seen it yet perhaps.

DR. WARREN: I agree with Jim, the low hanging fruit is where we need to
demonstrate that this is a good ROI, good benefit, etc., but I’m kind of
looking more down the road of are there recommendations that we can make now
that will set us up to get ready for once people recognize this is really what
we want, and then it’s a complex issue. So again, any suggestions that you
think of later on about recommendations we might make —

DR. CIMINO: Maybe a national committee that’s interested in health
statistics might be a forum —

— [Laughter.] —

MR. REYNOLDS: Justine, did you have a comment on this?

DR. CARR: The Quality Workgroup had hearings on things related to the
quality metrics and so on a while back and one of the things that we heard was
exactly this, the definition of smoking or the definition of acute myocardial
infarction, these were different between JCAHO and CMS and it created enormous
workload for the institutions. So it was a major milestone last year that
JCAHO and CMS agreed on certain of those myocardial infarction definitions.
But I wonder if the role of the committee might be to make a recommendation
that there is one body that adjudicates, I think we also heard from AHRQ that
they would be happy to do that but what’s missing right now is the assignment
of authority for adjudicating among the various recommendations and actually I
think we heard that from Brent James as well, that if we identified one group.

The other thing I’d like to say is that I really like the ideas that Dr.
Cimino put forward in terms of we have, we’re approaching quality from two
perspectives, the clinical groups and even JCAHO and CMS, they have very
spectacular ideas about what we should know to make care better but with an
enormous amount of work. And the ideas proposed today are let’s make building
blocks, let’s say if we could simply know about the drugs and the labs in
certain conditions, and we knew about it across the board, across the country
we would be so much better. So there’s a great appeal in creating a
coordination of working with these structures that we already have and building
up as opposed to beginning with a national disease condition and then demanding
the data regardless of whether it’s extraordinarily labor intense, or whether
it’s easily gotten.

MR. REYNOLDS: Okay, Maria has a comment on this and then Simon, and then let
me say this to everyone, then I’m going to turn it back over to Stan for all
his questions and to close the program on this one.

MS. FRIEDMAN: My comment is kind of an organizational behavior question,
because while I agree that you need some body to adjudicate all this, the
question is are people really going to give up their little turf and their
little silos? Following up on Judy’s nightmare, I see people walking around
with 15 different PDAs around their necks once the specialty societies, and
there are a bunch of them, wake up to the need for doing this and think it’s
really cool and they all have their specifications and their minimum datasets
and their own terminologies and on and on. It’s been a problem in the past,
people don’t want to give up their own little piece because they say it’s
unique, and it’s not necessarily unique, but it feeds into the adjudication
process. And that was just a comment, I think it needs to be done, I just
don’t know how we’d do it given that people don’t like to give up their data
and their uniqueness.

DR. CIMINO: But if you think about each of these datasets as a view onto
what the real data are then it becomes less difficult because everybody can
have their own datasets. So if you document what the patient is actually
smoking then different people can have criteria to decide whether this is
really a smoker or not and they can derive that from the original data. So I
think it gets problematic where you’re trying to decide whether somebody has
got one kind of a tumor or not but there’s some areas at least where I think
you take the original data and these datasets become views on those data, not
new storage of data, not new capture of data, but just simply interpretations
of data that are captured at the clinical level.

MS. FRIEDMAN: Because I think that gets back to the point both of you made
about educational efforts, I think those are very important and my personal
belief is that when they are done they’re done as an afterthought, and they
should be built into the process from the get-go, because once you explain to
people how that works, that you have this underlying body of data and then you
can have different takes or different slices of it for your own particular
needs, you don’t have to have all these data silos, and 50,000 forms for each,
it’s a good thing.

DR. COHN: I want to thank you both, I think it’s been a very interesting set
of conversations. I think that we’re all obviously supportive of the building
block view of the world, that’s why we’re talking like this, that’s why we’ve
been talking like this for ten years, trying to make this all happen.

I guess I have sort of two questions and one of them is just maybe a
clarification, maybe Justine can answer it as well as anyone, as you guys were
all talking about the need for some sort of reconciliation of all these quality
measures I guess we’re all looking at each other thinking the National Quality
Forum I thought was the group that’s been doing that, so maybe I’m confused and
I guess I need to go back and ask them but maybe Justine could clarify.

Now the other question I had for our speakers was obviously there’s been a
lot of focus on mapping of things like LOINC and all of these things to make
everything work and I guess I’m wondering if maybe there’s actually even a more
fundamental principle for vendors as well, I mean I know there’s these
large labs out there and things like this, but maybe there ought to be an
expectation at sort of the point of manufacture that these codes are fixed on
to things. I mean, is it unreasonable to expect at some point that, if there’s
mapping, it will be completely transparent to anybody who’s getting this stuff,
as opposed to some sort of after-the-fact mapping? Is that a vision for the
future? Am I missing something here? Kent?
Jim?

DR. CIMINO: I think it depends on the domain you’re talking about, so if
you’re talking about laboratories there’s a finite set of laboratory
instruments and types of tests that are done, although it’s a large number, we
know, we can wrap our hands around it and say what those are. But there’s
still, there’s always local customizations that are needed to satisfy the other
tasks, the workflow tasks and things like that that get added on to all this,
so that the code for a pediatric test is different than an adult test
or whatever, so it seems like it’s hard to get away from the
need for these additional local codes. And how they get mapped I think is
really just going to be a matter of better integration of standards into the
products, the systems that the users are using, that gives them both of those
and takes care of the mapping for them. So for laboratory I think it won’t be
too bad.

Now for medications it’s my understanding that the FDA is revamping the NDC
codes and coming up with a new way to get at these, get these codes, so that
they’re not determined by manufacturers and they can’t reuse the codes and they
can’t change the codes and they can’t change the ingredients in the things
without changing the codes, that it’s going to be better management and better
terminology principles for where the NDC codes go. So if that happens NDC and
RxNorm may become much more one and the same sort of entity, and there, you
mentioned manufacturers, the government is going to be telling the
manufacturers if you want to play you’ve got to give us good codes, we’re going
to have good codes for your products. So there’s an area where it would work
somewhat differently than in the laboratory, I don’t see us going to the
laboratory analyte vendors and trying to force them into terminologies, that’s
too far away from where we’re trying to be.

MR. REYNOLDS: Justine, did you want to comment?

DR. CARR: I had a little trouble hearing part of that.

DR. COHN: Justine, my question for you was we were talking about this issue
of all these quality groups and all that stuff, I thought the role of the
National Quality Forum was to sort of deal a lot with I think the angst that
Judy and others were describing, am I missing something?

DR. CARR: They have, I don’t believe that they have identified, for example
the core measures that the people are required to submit today are acute MI,
heart failure, surgical prophylactic, pneumonococcal vaccine, pneumonia
management, and I think the National Quality Forum has their 27 safe practices
that don’t have any specific reporting requirements and they have, I know
they’ve done a cardiac surgery and maybe they’re doing cardiology but they were
late to these initiatives that have been ongoing for a number of years with
JCAHO and CMS so it was really, the official requirement, reporting requirement
is JCAHO and CMS. And so I hear what you’re saying but I don’t think, we should
clarify it, but I don’t think, at least what we heard last spring from Carolyn
Clancy was that it would help if there were an adjudicating body and there was
not as of that presentation.

DR. WARREN: The reason I brought this up is I know what the National Council
is doing but we have other groups, like the National Nursing Quality Indicator
Set, which is if you want to be a magnet hospital you have to contribute data
to that repository so that we can start looking at quality indicators in
nursing. And one of the biggest problems they faced is that they’re looking at
falls, they’re looking at pressure ulcers, they’ve got several other indicators
in there and the biggest problem they have when they start working with these
hospitals is to define what’s a fall, what’s a pressure ulcer, what is an
incidence, what’s a prevalence of a pressure ulcer, trying to get the data
elements right so that they can pull this into the database and then aggregate
it to a national level. So it’s very hard and I think there are all these
groups that are coming up that are doing similar things.

MR. REYNOLDS: All right, Stan, you get your turn and then if there’s any
time left Vivian had her hand up. You’re done? Okay, Stan, you’ve got the ball.

DR. HUFF: Well again I would just like to express my appreciation to Dr.
Spackman and Dr. Cimino for their ideas and their presentations and this good
discussion. In the back of my mind, again, I’m trying to drive to what could we
say, what recommendations could we make specifically or projects that we could
undertake, and that’s difficult. At the same time if you say general things
then sort of nothing happens, so let me mention some things that really struck
me from the discussion in terms of actionable things we could do, and get you
guys to reflect on them and think of other things.

One is in fact, just to second what Simon was saying, to get manufacturers
of lab tests to put either LOINC codes or SNOMED codes, and I don’t care which
honestly, with their tests, so that you have a standard coded representation
for that thing when it comes out of the laboratory, and there may be other
local codes created but that would cause those things to have an understood,
well defined meaning, and that would be a much more efficient way to do it than
what we do now. Because what happens now is that the test kit comes out, a
thousand laboratories use it, and each one of those thousand laboratories does
the mapping and puts in labor to map from that to what the name of it is in
their local system. And if it started from the manufacturer then that code
would come from the manufacturer, it would be part of the instrument, it would
be sent by the instrument by automated interfaces to the lab information
system, which would send it on, and so it would track through all of those
systems from that common source and it would be a much more efficient way to
install that. So I think following what Simon said that would be wonderful.
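[For concreteness: HL7 v2 result messages already let a standard code and a
local code travel together in the OBX-3 observation identifier, as primary and
alternate code triplets, so a manufacturer-assigned code could ride alongside
the lab’s own. A sketch, where 2345-7 is LOINC’s serum/plasma glucose and the
local code GLU123 is invented:]

```python
# Hypothetical OBX-3 field: LOINC triplet first, local triplet second.
obx3 = "2345-7^Glucose SerPl^LN^GLU123^Glucose^L"

def codes(obx3_field):
    """Split OBX-3 into its standard and local (code, coding system) pairs."""
    p = obx3_field.split("^")
    return {"standard": (p[0], p[2]), "local": (p[3], p[5])}

print(codes(obx3))
# {'standard': ('2345-7', 'LN'), 'local': ('GLU123', 'L')}
```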

Second thing, and again that’s focusing on the low hanging fruit, that’s a
thing we could do in particular with lab that would help the lab. And then
there’s the idea with the FDA of getting in fact a tight correlation between
ingredient codes, RxNorm codes, and NDC codes, so that that part was all
functioning and there was a single authoritative source for those things, and
then they could be distributed lots of different ways but you have a single way
of doing that.

Again it speaks to efficiencies, because what you have now is people
producing drugs and then all of the drug knowledge vendors individually trying
to track what drugs were created, what the NDC code is for that drug, what the
ingredients in that drug are, and they’re all doing it independently. Whereas,
and I know this is already the intent of the FDA, if that information were
collected up front by the FDA and then distributed to everybody, that would be
a much more efficient process nationally.

And there’s still lots of room for the drug knowledge base vendors to add
value of their own, by adding knowledge and allergy classes and other things,
there’s still plenty of room there. So that seems to be an efficiency that if
we could somehow get into play that would be great.

Speaking to the issue that you raised, one of the things that’s been
frustrating to me in trying to install coded terminologies in the real world
like Jim is speaking about, we want, Intermountain Health Care wants to
participate in things like the cancer checklists and we also participate in the
Society of Thoracic Surgeons database and we participate in National Registry
for Myocardial Infarctions and that sort of stuff. One of the frustrations of
that is seeing questions come from those professional societies that say things
like did the patient have one or more episodes of chest pain 30 minutes before
surgery.

And so you’re in this quandary, the part of me that loves standard coded
terminology says, it’s a spike in my heart to see that kind of pre-coordinated
aggregation of stuff because what that means is yeah, you can say yes or no to
that question and all you really know is yes or no to that question and you
would like, I mean it just literally every one of these and some of it even
creeps into the cancer checklists. And what you see is in fact that there’s a
huge interest right now and Kent could probably speak to it in terms of
organizations that are either approaching HL7 because they want to standardize
their data or approaching SNOMED because they want standardized terms, there’s
a huge interest in it and there’s no, they’re starting fresh like there is no
medical informatics knowledge base, that there’s no medical informatics
profession.

And so they create these kinds of questions where you can’t reuse the data. If
you would just say look, if you’ll record the time that the surgery happened
and if you’ll record that the patient had pain and we assume that we have a
time stamped record where those things happen, I can make thousands of
different inferences from that kind of data and it doesn’t cost any more to
record it that way and then I can aggregate and I can answer the question did
they have chest pain 30 minutes before surgery or not. That’s something I can
do with a very simple rule if I assume I have time stamps.
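[His point reduces to a one-line temporal query over time-stamped records; a
sketch with invented data.]

```python
from datetime import datetime, timedelta

# Hypothetical time-stamped clinical events from the record.
events = [("chest pain", datetime(2005, 9, 22, 7, 40)),
          ("chest pain", datetime(2005, 9, 22, 6, 10))]
surgery_start = datetime(2005, 9, 22, 8, 0)

# Derive the registry's pre-coordinated yes/no from primary data
# instead of asking someone to record the answer directly.
had_pain_within_30_min = any(
    kind == "chest pain"
    and timedelta(0) <= surgery_start - t <= timedelta(minutes=30)
    for kind, t in events)

print(had_pain_within_30_min)  # True -- and the same time-stamped
# record supports thousands of other inferences
```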

So going along with what Jim suggested in terms of education, it seems like
something that we need to do somehow is in fact produce some standard
guidelines about good data collection methodology, and proselytize that to all
of these professional organizations that now have a huge enthusiasm for trying
to collect data and trying to standardize data. I’m not sure how to do that but
that’s one of the things that I think we could do.

So questions that I still have, so we talked about again just focusing on
the low hanging fruit in the areas of labs, meds, problems, allergies, are
there any standards that you guys can see that we need? Do we have any more to
say there really than please continue to support standard terminologies,
SNOMED, LOINC, RxNorm, are there specific standards in those areas that need to
be developed or is it a matter of continuing to just support those and continue
to support mappings and aggregations? Is that —

DR. CIMINO: I think that that’s certainly enough to get started, that’s
plenty to do just doing that and see where we might need additional
standardization. I’d also add that coordination among those terminologies is
going to be important if we want to do some more sophisticated things. So
certainly there’s overlap among SNOMED and the others, and how do we coordinate
that so we take the best advantage of everything.

DR. HUFF: That’s an excellent point.

DR. SPACKMAN: I’d agree with that, that there’s a real need for
coordination, there’s an opportunity to both save effort and to increase
quality by doing that and so I think the coordination is important.

The other thing that I would add is there’s probably a need for some further
development, research and development in tools for mapping from local
terminologies to the standard terminologies, I mean I think this is an area
where there are consulting firms that are happy to make lots of money doing it
but we might raise the overall level of people’s capabilities if there was some
further research and development into tools that can accomplish that because
just about everybody is going to have a local terminology that needs to be
mapped into these and the techniques for doing that are relatively well
understood and maybe some further research in publicly funded tools might raise
the boats, raise everybody’s boats.

DR. HUFF: So two other questions trying to drill down. Again, educational
efforts, I’m in complete agreement, saying it generally, I don’t know what to
suggest specifically. If we were going to do education about these issues,
would we try and make inroads into professional society meetings, or how would
we do this education for users and vendors?

DR. CIMINO: There’s a lot of different ways you could get at it, some could
be just sort of informational, but I think if we could find ways to do hands-on
sorts of things, like a workshop where it’s bring your system and we’ll figure
out how to make it more terminology capable, whether it’s your own homegrown
system or a vendor system. I’m not sure how you do that efficiently, do you
bring one example and everybody learns from that, or do you have case studies
or that sort of thing. It’s not something I’ve ever actually tried to teach,
that aspect of it, I’ve always been involved in just getting people to
understand that terminologies are out there and what you can do with them, but
how you actually adapt systems would have to go to the next level.

DR. HUFF: It probably is the users because I mean ultimately it’s the users
that demand and are willing to pay for these capabilities in systems and the
vendors, the vendors often will understand but they wait to do it until
somebody is willing to pay for it. So keep thinking about that.

The final thing, trying to wrap up here in my allotted time, in terms of
how we could demonstrate the effectiveness of this, and this is where I think
we could focus: we might focus on one project that had to do with public health
reporting, another that had to do with quality, or another that had to do with
patient safety, any of those. But it seems like we need to focus down so that
we could suggest, again looking at the low hanging fruit, one specific sort of
research project, is there one specific project related to diabetics that could
be supported by this, or a specific post marketing drug surveillance effort
that we could focus on, somehow, so that we could do something nationally that
demonstrated the effectiveness of this in one or more areas, in order to sort
of prime the pump and get people thinking along these lines. Thoughts on sort
of what research project or demonstration project we could do that would be —

DR. SPACKMAN: One that has frequently been suggested to me by the
ophthalmologists is diabetic retinopathy screening, that would be a real
opportunity to demonstrate the value of the feedback loop, the public health
aspects, preventing loss of vision by diabetics and so on, there could be some
value in something like that. And there are genuine terminology issues in
diabetic retinopathy screening, that’s one example.

DR. CIMINO: I’d second that, diabetes as a domain because there’s a lot of
labs in it, there’s a lot of drugs in it and there’s a lot of morbidities in it
and so it brings in all these areas that we’ve both been talking about. And if
you pick one specific one then you don’t have to do all the drugs and all the
lab tests, you just have to do a subset to get started and to show value.

DR. HUFF: Thank you very much, I really do appreciate your thoughts.

Agenda Item: Review and Planning for Future
Meetings

MR. REYNOLDS: Thanks to both of you, excellent job and great discussion.

For the committee, as we wrap up: if the claims attachment comes out we’ll
probably be scheduling a teleconference to decide whether or not we want to
make any comments about it, so I’ll leave it with Maria that if it comes out
and we need to do something we’ll work out the time to make that happen.

The second thing, our next hearing is December 7th and
8th and going back to yesterday when we talked about some things,
two things I think would be useful and Jeff please jump in and add to this.
Stan I think it would be good if you think about what would be our next steps,
do we need more testimony, do we want to continue this, do we need other
things, what do we need to do, and Judy the same thing, and I know you had
some ideas this morning that we may need some other discussion. But as we look
at December we’re working with Tony Sheath to try to get the right people to
come in and kind of close out our ROI on HIPAA, we needed to talk to some
people that were actually implementers and what were the impediments that they
ran into, that was kind of the last piece we had there so that would close that
out for us. And WEDI would also be coming in to add anything further that they
had to that.

It looks like we would also be talking about the e-prescribing regulation and
discussing the pilot awards, which should be out by then, so that takes our
subject of e-prescribing and starts really closing in on that. And then
out of our discussion yesterday CHI asked to come back and discuss with us
their allergy recommendation, so those are kind of the things based on what we
have and we’re going to update this based on what we talked about yesterday and
send it out to everyone but that’s kind of what we’ll focus on to build it but
we’ll obviously be talking soon again on this other conference call and then we
can maybe have something a little more final and if you think of anything
different that you would like or any positioning different then we can go ahead
and do that.

Because again, I think the key thing is if we get too many subjects going
and don’t close some of them then we’re going to have so much in process that
we really don’t have anything, so we’re trying to make sure we keep a balance
between going way ahead and making sure we finish some of the things that we
already had on the plate, what’s that balanced portfolio we keep trying to deal
with. So any comments from anyone?

MR. BLAIR: One of the other things I think we need to add to our list is we
need to get February dates pinned down for our February subcommittee meeting,
we still haven’t done that yet.

MS. FRIEDMAN: It would be nice to take it out farther too.

MR. BLAIR: Include February and maybe April or June.

DR. COHN: Can I make a comment on that? Because I actually was going to
comment that I would A, like to see, I mean normally what you try to do is you
try to schedule at least six months or a year, I mean I think everybody needs
to know that. I guess I would ask just personally if there was a way not to
schedule subcommittee meetings within two weeks of the full committee meeting,
and so I think we need to look at ways that we space these across and I think
there’s plenty of ability either before or after full committee meetings, so
I’d ask the help of the chairs in terms of that, I think it just balances
things out a little better.

DR. WARREN: I was just wondering, since we have this full committee meeting in
November and then our subcommittee in December, what your plans were for our
breakout at the full committee. Did you want us to come back with strategies
for handling our issues? For example, should Stan and I come back with a
summary of where we are and what the next steps are?

MR. REYNOLDS: I think that would be a good discussion for that —

DR. WARREN: So we’re not talking about bringing in testimony at that time?

MR. REYNOLDS: No, no, just you guys putting together a plan; we’ve got a
couple of subjects going.

DR. COHN: Well, I was just going to remind our co-chairs, as well as Stan, as
you heard from me the other week on this one, that there was a hope that at the
full committee meeting in November something would be done to bring others up
to speed on the discussions relating to secondary uses of data. I don’t know
how long we’re going to have; that will be a discussion with the executive
subcommittee. But there has been a feeling that there’s a need to bring others
along, or else the Quality Workgroup is going to have a similar set of
hearings, and so on. It doesn’t need to be decided now, but there needs to be
some offline conversation about that.

DR. HUFF: Along those lines, what I was hoping to do, for that purpose as well
as to focus the activity, is to summarize key points from the testifiers we’ve
heard and start making a list of potential recommendations or talking points. I
think we probably do want a couple more people with particular interest in this
area to testify in December, but I think we increasingly want to get to a point
where, whether we’re discussing within the committee or having other people
come and talk, we’re basically saying: this is what we think we understand,
and based on that, this is what we might propose as recommendations. Then we
get people to reflect on those, so that we’re not just continually collecting
information but are trying to synthesize it and come to recommendations and
actions we can take from it.

MR. BLAIR: One thing I think we can do, and I think it would be very helpful
because I happen to think the whole topic of secondary uses of clinically
specific information is extremely important, is to have the recommendation
letter set forth a national strategy. The strategy would show the relationship
of this initiative to the National Health Information Network, to the
harmonization of standards, and, in the short term, to the improvement of
quality, patient safety, and cost effectiveness. It would therefore be a
comprehensive strategy showing all of those linkages and a long term way to get
there, and within that long term strategy we could include the specific low
hanging fruit elements and show how the low hanging fruit sets forth the steps
to accomplish all these linkages and reach the long term strategy. That’s just
my thought on that.

MR. REYNOLDS: Stan, one possible comment, or one friendly amendment to the
direction: discuss what we’ve heard and the subject itself with the full
committee, and then cover any recommendations in our breakout rather than
discussing them with the full committee before we have another opportunity,
maybe as this group, to kick them around. That would be my only recommended
adjustment.

DR. HUFF: And I was kind of mixing the two things together; I agree with that,
I wasn’t trying to preempt discussion —

MR. REYNOLDS: And I didn’t take it that way; I just want to make sure, for the
record and for our process, that we all —

DR. HUFF: Say again, Jeff, the things that you see this linked to in terms of
strategy, the NHIN and —

MR. BLAIR: I think this needs to be related to the role it plays in the
direction of the National Health Information Infrastructure; I’m going to go
back to that word, the infrastructure rather than just the network portion of
it. The other piece is the relationship to the harmonization of standards,
because in a sense it not only takes advantage of harmonized standards, it
contributes to them as well: interoperability. And the third thing is what I
would like to think of as near term uses. For example, the DOCIT(?) program is
winding up saying that you should implement SNOMED, LOINC, and RxNorm, and, oh
by the way, we’re going to wind up gathering that data for performance
measurement, and it starts to relate to pay for performance and outcomes
measurement and all of those pieces. So it relates to all of these pieces, and
I think the strategy should indicate that this is a major portion of the
overall national direction and that it relates to all of them. And there may be
more; I may have left something out.

MR. REYNOLDS: Maria, you have the final comment.

MS. FRIEDMAN: Well, actually it’s a two parter. The first is for the breakout
session in November: if, after the teleconference on the claims attachment reg,
you want to do a letter, that’s an opportunity to put it together. The second,
for maybe December or nearer term, is that we might have a report from ONC, the
Office of the National Coordinator, which is what they’re calling themselves
these days instead of ONCHIT, about their activities, their RFPs, and their
staffing as they start to staff up, and who’s doing what, just so we have the
lay of the land as to how their office is shaping up in terms of portfolios.

MR. REYNOLDS: Okay, you and I and Jeff will look at the timing of what each of
these subjects might take. Yes, Justine?

DR. CARR: May I say something? Two things. First, I just wanted to clarify the
question about the National Quality Forum. According to their statement page,
they have convened a panel of leading experts on quality improvement and
measurement to identify principles and priorities that will guide a national
measurement and reporting strategy, and the Forum will also endorse quality
measures for national use. I don’t know whether that endorsement language
carries any sort of teeth, but it’s a good question, and maybe things have
changed; you can look further.

My other point is how the Quality Workgroup could take a piece of what you’re
talking about and perhaps have more hearings with the NQF, CMS, and so on, to
get a sense of where they’re going and whether they would be moving in the
direction of data that could be gathered in a more automated fashion.

MR. REYNOLDS: Well, I think the purpose, and I’ll speak for Simon and then he
can correct me, the whole reason we talked about moving this subject into the
full committee, at least for discussion purposes, is to take a look at how any
committee that deals with this new subject should link to it or individually
take pieces of it. I think that’s what introducing some of these bigger
subjects to the full committee is all about. Then as we talk them through we’ll
know how to link, rather than meeting with the chairs of any particular
committee and trying to decide how to work it. Give access to the full
committee and then work it back down through the subcommittees; that’s kind of
where I think we’re going.

Okay, I’d like to thank everyone for your attentiveness, and the meeting is
adjourned.

[Whereupon at 11:15 a.m. the meeting was adjourned.]