[This Transcript is Unedited]

Department of Health and Human Services

National Committee on Vital and Health Statistics

Subcommittee on Standards and Security

December 7, 2005

Hubert H. Humphrey Building
Room 705A
200 Independence Avenue, S.W.
Washington, DC 20201

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway, suite 180
Fairfax, Virginia 22030
(703) 352-0091


P R O C E E D I N G S [9:05 a.m.]

Agenda Item: Call to Order, Welcome and Introductions – Mr. Blair and Mr. Reynolds

MR. REYNOLDS: Let’s go ahead and get started this morning please. Good
morning, I want to call this meeting to order, this is a meeting of the
Subcommittee on Standards and Security of the National Committee on Vital and
Health Statistics. The committee as you all know is the main public advisory
committee to the U.S. Department of Health and Human Services on national
health information policy. I am Harry Reynolds, co-chairman of the
subcommittee, and vice president of Blue Cross and Blue Shield of North
Carolina. I want to welcome my co-chair Jeff Blair, fellow subcommittee
members, staff and others here in person. I do want to inform everyone that
this is a public meeting and we are broadcasting on the internet so please
speak clearly into the microphone at your place. Also the meeting is being
recorded and transcribed.

With that let’s have introductions around the table and then around the
room. For those on the subcommittee please note any conflicts of interest
related to issues coming before us today and I will go ahead and note that one
of the speakers this morning is from Blue Cross and Blue Shield of Arkansas and
since I am from Blue Cross in North Carolina we actually have no major
affiliation but I did want to note that. Since it’s an informational
presentation I don’t think it will create any issues. Jeffrey.

MR. BLAIR: I’m Jeff Blair, I’m vice president of the Medical Records
Institute and co-chair of the Subcommittee on Standards and Security of the
NCVHS and I’m not aware of any conflicts of interest.

DR. STEINDEL: Steve Steindel, Centers for Disease Control and Prevention,
staff to the subcommittee and liaison to the full committee.

DR. HUFF: Stan Huff with Intermountain Health Care and the University of
Utah in Salt Lake City, member of the committee and subcommittee, and I don't
think I have any conflicts or issues that are coming before the committee.

DR. FITZMAURICE: Michael Fitzmaurice, Agency for Healthcare Research and
Quality, liaison to the committee and staff to the subcommittee.

DR. FERRE: Jorge Ferre, staff to the subcommittee, no conflicts.

MS. GOVAN-JENKINS: Wanda Govan-Jenkins, NCHS, CDC, staff to the
subcommittee, and there are no conflicts that I’m aware of.

MS. GREENBERG: Marjorie Greenberg, National Center for Health Statistics,
CDC, and executive secretary to the committee.

DR. WARREN: This is Judy Warren, University of Kansas School of Nursing,
member of the subcommittee, I have no conflicts.

DR. COHN: Simon Cohn, associate executive director for health information
policy for Kaiser Permanente and chair of the committee, I have no conflicts.

MS. PICKETT: Donna Pickett, National Center for Health Statistics, CDC, and
staff to the subcommittee.

MS. O’CONNOR(?): Michelle O’Connor, Initiate Systems.

MS. BOYD: Lynn Boyd, College of American Pathologists.

MS. BYRNE(?): Terri Byrne, RxHub.

MS. WILLIE(?): Shelly Willie, RxHub.

MS. GILBERTSON: Lynne Gilbertson, National Council for Prescription Drug Programs.

MS. JACKSON: Debbie Jackson, National Center for Health Statistics, CDC,
committee staff.

MR. WHITTEMORE(?): Ken Whittemore, Sure Scripts.

MS. STARR(?): Shelly Starr representing American Consultant Pharmacists.

MR. BRADSHAW: Jerry Bradshaw, Arkansas Blue Cross and Blue Shield.

MR. DECARLO(?): Michael Decarlo, Blue Cross and Blue Shield Association.

PARTICIPANT: (?), Point-of-Care Partners.

MR. SCHUETH: Tony Schueth, Point-of-Care Partners.

MR. MARTIN: Ross Martin, Pfizer.

MR. ROBINSON: George Robinson, First Data Bank.

MS. WILLIAMSON: Michelle Williamson, NCHS, CDC.

MS. ZIGMAN-LUKE: Marilyn Zigman-Luke, America’s Health Insurance Plans.

MR. REYNOLDS: Okay, we're awaiting our first speaker. Karen is actually
parking right now; if she weren't parking we were going to go ahead and switch
the order. We'll wait a few minutes for Karen, because I've got something else
to pass out to the committee. Shaun Grannis isn't here yet either, so we may
have Jerry Bradshaw go ahead as soon as Karen is done if Shaun is not here, so
that we can continue to flow.

While we're waiting for Karen to arrive: those of you on the committee will
remember we've had a number of hearings on HIPAA ROI, and we had a conference
call in April about some possible recommendations we had drafted out of our
previous hearings. I'd like to pass those back around today, not for resolution
today, but so that you keep them in mind as you listen to the discussions
tomorrow. Look back through these to make sure they are still in fact
worthwhile, and whether you've changed your mind based on anything you've heard
since. Our process plan is to build off of these five recommendations we've
already talked about, add whatever we hear new in the testimony tomorrow, and
consider putting forward a letter at the February full committee meeting.

So we can avert that if we aren't in agreement, but I do want to make sure
that everybody steps back, since some of these hearings have been spread out
somewhat, so that we do in fact come to some resolution on what we want to take
forward.

So, with that complete, unless anybody has any questions I would like to
welcome Karen Trudel.

Agenda Item: CMS Update – Ms. Trudel

MS. TRUDEL: Good morning, I apologize, there was a detour around the

I’m just going to do a short update on HIPAA and e-prescribing if that’s
okay. In terms of e-prescribing I think we have already reported that CMS is
partnering with AHRQ to conduct the e-prescribing pilot and we received 16
applications in response to the RFA, 14 applications met minimum qualifications
and went into the review process. The special emphasis panel convened on
December 1st and was chaired by Ed Hammond from Duke University
Medical Center. The panel consisted of a good cross section of medical
professionals from academia, industry, and the federal government and from what
I've heard the panel went very well and we expect to announce the awards.

On HIPAA for the claims attachment NPRM even though we’ve extended the
period for comment to January 23rd we actually have already received
68 comments and we expect that probably a number of those commenters, because
they didn’t get a chance to look at some of the technical aspects may give us
additional comments toward the end of the period and that’s perfectly all
right, you can comment as many times as you’d like.

In terms of enforcement we have 121 open transaction and code set complaints
and 42 open security complaints that we’re in the process of reviewing and
analyzing and we continue to make good progress on that. And in all of the 400
or so valid complaints we’ve received so far every entity where there has been
a valid complaint has been willing to work with us voluntarily to resolve
whatever the issue was.

In terms of Medicare EDI our remittance advice percentage is that we now
have 93 percent of the receivers in production on the remittance advice which
is the transaction that goes from Medicare fee for service back to the
provider. 31 percent of our coordination of benefits are now occurring
electronically, and in terms of the eligibility, the 270/271 transaction, in
the period of November 21st to 25th which is the most
recent one we have, no, let me give you a full week, the 14th to the
18th, about 650,000 eligibility queries in that one week so that’s
really ramping up quite nicely, we’re very pleased about that.

For the NPI as of December 5th we had enumerated 233,000
providers and so that system continues to work successfully.

And for the last thing that I want to report is that CMS is in the process
of changing its emphasis with respect to how we’re going to be carrying out
HIPAA outreach in the future. We’ve done analysis of the numbers and types of
inquiries that we’re receiving on both our HIPAA hotline, the 800 number, and
our Ask HIPAA email resource and we find that the numbers have been declining
quite significantly and that many of the questions that we’re getting either
can be handled by information that’s already available on our website or the
questions are so complex and related to a particular organization that we
aren’t able to answer that in any case.

So what we are doing is basically taking down the hotline as of the end of
December, and we also will be using the Ask HIPAA email resource in the same
way that OCR uses theirs, which is that any queries that are gathered will be
analyzed periodically to determine the need for additional FAQs or content on
the website. So we're basically transforming our outreach from a one-on-one
kind of approach to one where we're going to use our resources to make what's
on our website, and what's available generally, more robust in the hopes that
we can reach more people in that way.

So we’ll be monitoring that for any potential negative impacts and we’ve
already talked to WEDI about how we can work with them to perhaps take
advantage of some of the educational activities that they already have.

And that’s all I have to report.

MR. REYNOLDS: Okay, Steve?

DR. STEINDEL: Just a very quick question, Karen. When you said you're
taking down the Ask HIPAA hotline, are you going to just leave up the URL with
some links to the information that will be available?

MS. TRUDEL: We will leave up the email address and people can still send us questions.

DR. STEINDEL: Yeah, I didn’t want people to feel that they were —

MS. TRUDEL: You’re right, it’s the same way that OCR does, so there’s still
Ask HIPAA at CMS.HHS.gov but the message that comes back will be that you will
not get an individualized response and that we do suggest that you look to the
content on the website, thank you for allowing me to clarify that.


MS. BURKE-BEBEE: You mentioned the eligibility part of the Medicare EDI and
that you have gotten 650,000 queries; what does that mean?

MS. TRUDEL: What that means is that every week we are receiving and
fulfilling eligibility requests via the 270/271 in HIPAA compliant format and
we have, we brought that up I think at late summer I believe and the volume has
increased exponentially so we’re doing, and I mean 500,000, 600,000 queries a
week is really very significant and then we will be bringing up a web based
similar process that will be used primarily by small providers who can actually
send one time, these are large EDI files coming primarily from large providers
and clearinghouses.

MR. REYNOLDS: Michael.

DR. FITZMAURICE: You mentioned that there were 400 valid complaints, is that
as of like starting with a certain date, starting with the original
transactions and code sets, so like October —

MS. TRUDEL: Right back to the beginning.

DR. FITZMAURICE: All right, and out of those 400, 120 are open transaction
and code set complaints and 42 are still open security complaints?

MS. TRUDEL: Correct.

DR. FITZMAURICE: Okay, so the 400 refers to both total TCS and security.



MR. REYNOLDS: Karen, I'd like to commend CMS as a member of the industry. I
happened to be on an audio conference for the American Bar Association where I
was presenting, and one of the questions was how did CMS handle the
contingency. I commended CMS for having a contingency at a time when the
industry wasn't ready, for the way you pulled it off and then took it away,
which says basically it's ready to go, and thirdly for the fact that you had a
sentence in there that said other entities need to do what they need to do. As
a model of being out front, making sure that things got done, but doing it in a
way that the industry could align to, I thought it was exceptional.

MS. TRUDEL: Thank you, I’ll pass that along to our EDI —

MR. REYNOLDS: I thought that was really well done, really well done.

MS. TRUDEL: Thank you.

MR. REYNOLDS: Any other comments? Okay, thank you Karen.

I’d like to now turn the program over to our esteemed colleague Judy Warren,
who has the task of working on one of our subjects that we’re trying to come to
grips with which is matching patients to their records.

DR. WARREN: This is the second set of testimony that we have on re-looking
at our task in terms of linking patients to their data. With that I have
brought in two testifiers today that are approaching things somewhat similarly
to testifiers in the past but with unique changes and differences in that. In
talking with both of these gentlemen I think we have some interesting ideas and
so with that I don’t want to take up too much time, I’d like to have time for
discussion on their presentations. I’d like to present Shaun Grannis, who is at
Regenstrief and actually has the I guess privilege of working with Clem, Clem
is well known to this committee, so with that I'd like to turn it over to you.

Agenda Item: Matching Patients to Their Records – Dr. Grannis

DR. GRANNIS: As Dr. Warren said I’m a research scientist at the Regenstrief
Institute and a practicing family physician at the Indiana University School of
Medicine. One of my ongoing areas of research is record linkage and patient
matching and I’m pleased, sincerely pleased for the opportunity to provide
testimony on what I think is a very important topic. My understanding is that
the committee is interested in the technical aspects of record linkage as well
as how Regenstrief has implemented patient matching and some of those
considerations in our 11-year-old operational community-wide system called the
INPC, or Indiana Network for Patient Care.

I’m going to begin by describing some fundamental challenges to record
linkages as we see them and some of the proposed solutions including the
spectrum of approaches ranging from deterministic or rule based methods to
probabilistic or statistical techniques. I’m going to describe some of our key
research findings for both deterministic and probabilistic methods as well as
additional findings that I think are pertinent to the question of National
Patient Identifiers and linking. I’m going to finish up with the Regenstrief
architecture for patient matching and record linkage and time permitting
actually have some real screen shots of the system that the day to day
providers use to look up patients in the system.

So, as this committee is well aware, health care information is distributed
across many independent databases and systems, both within and among
organizations, as separate islands with different identifiers. Within the same
hospital the same patient is in the lab system, the radiology system, and the
registration system with different IDs, so there's a challenge both within the
hospital and across the system as well, and this situation interferes with the
aggregation of information about individuals for clinical care, research,
public health reporting, etc. It's a fundamental challenge, and if we move to a
regional and national level, as current initiatives envision, this is going to
be an even greater challenge.

So some of the challenges to accurate record linkage are listed on this
slide. First, patient identifiers are often recorded with phonetic errors: if I
tell somebody my name is Shaun they don't know how to spell that name with 100
percent accuracy, so there are phonetic recording errors. There are also
typographical errors; we know that there are insertions, deletions, and
transpositions, and we have a pretty good understanding of what those rates are
in the system.

Also patient identifiers are not immutable, they change, for instance,
female last names frequently change when they get married. Additionally home
address, telephone numbers, zip codes, etc., are changing identifiers.

And thirdly purposely or not patients may share unique identifiers, so we
know that family members share Social Security Numbers, we know that some
people misrepresent what their Social Security Number is so there are some real
challenges to patient linking. Certainly the ideal identifier is immutable,
doesn’t change, it’s ubiquitous, available everywhere, it’s unique to that
individual and it’s secure and there are policies around that type of
identifier. So there’s some real challenges there.

There have been a number of potential solutions proposed including a
National Patient Identifier. There are going to be recording errors in a
National Patient Identifier, even the best created identifiers have errors in
there and I’m going to share with you some of the errors that we’re aware of.
There may be some sharing of IDs, we don’t know that but that’s a potential
problem as well. There will be lost and forgotten IDs and I can’t underscore
this issue enough. As the father of a three-year-old child who goes to the
doctor's office quite a bit, I sometimes show up without my insurance card. And I
still receive health care, they don’t deny me health care because I don’t have
that, so clinical information is going to be entered on individuals without
these unique identifiers or these ideal unique identifiers. So these are some
of the challenges and some of the tradeoffs of a National Patient Identifier.

Some have proposed biometrics as a potential option: retinal scanning, hand
recognition, fingerprint recognition, voice recognition. The challenge here is
ubiquity; at this point biometrics involves proprietary technology, so not
everybody has such technology. And the way one vendor represents a fingerprint
identifier in a system can be different from the way it’s represented in
another system. So even though two hospitals may say yeah, we’ve got
fingerprint identification, they may not be able to exchange that information
for identification purposes because it’s not stored in a standardized format.
So there are some issues to work through with the biometrics as well and
certainly there are related privacy concerns.

Finally, there are sophisticated linkage algorithms that folks have
talked to you about already, the Initiate(?) System and others. I'm going to
talk a little bit about the spectrum of linkage technologies that are out
there. It’s really a continuum of linkage technologies ranging from simple
deterministic algorithms to more sophisticated probabilistic algorithms, and so
in general deterministic methods tend to involve simple or exact match
calculations, they can be quickly implemented. Because of their simple models
these algorithms rely on more accurate data, they’re not as flexible, and they
do not perform as well in other datasets.

In contrast probabilistic methods tend to use more complex computationally
intensive statistical and information theoretic models. These methods tend to
be more forgiving of noisy data, errors in the data, these recording errors
that I’ve talked about, and are able to accommodate varying datasets.
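As a minimal illustration of the deterministic end of that spectrum, here is a sketch of an exact-match rule; the field names and records are hypothetical, not from any actual registry:

```python
def exact_match(a: dict, b: dict, fields: tuple) -> bool:
    """Deterministic linkage at its simplest: link only if every listed
    field is present and agrees exactly. Brittle by design: a single
    typo in any required field breaks the match."""
    return all(a.get(f) and a.get(f) == b.get(f) for f in fields)

# Hypothetical records; the transposed day/month in the DOB defeats
# the stricter rule even though the records describe the same person.
r1 = {"ssn": "123-45-6789", "last": "DOYLE", "dob": "1931-05-02"}
r2 = {"ssn": "123-45-6789", "last": "DOYLE", "dob": "1931-02-05"}
print(exact_match(r1, r2, ("ssn", "last")))         # True
print(exact_match(r1, r2, ("ssn", "last", "dob")))  # False
```

This brittleness is why deterministic methods, as the testimony notes, depend on clean data.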

So, to better understand the performance of individual identifiers, we began
by studying a very simple deterministic linkage algorithm, to really nail
down which identifiers work well and which do not. For our initial
deterministic algorithm we took two hospital registries and linked
them to the Social Security Death Master File, which is a publicly available
file containing all of the decedents in the United States who had Social
Security Numbers, so we knew there was going to be a subset of our hospital
patients in this independent file.

What we found was that using combinations of Social Security Number, name,
date of birth and gender we were able to achieve a 90 percent sensitivity,
that's the true positive rate, or true link rate, and 100 percent specificity.
Specificity reflects how well you protect against falsely linking, so 100
percent specificity implies no false links in the dataset.
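Those two measures reduce to simple ratios over manually adjudicated record pairs. A small sketch with made-up counts (the 900/100 split mirrors the 90 percent sensitivity figure, but the numbers themselves are illustrative):

```python
def sensitivity(true_links: int, missed_links: int) -> float:
    """True link rate: the fraction of genuinely matching pairs
    that the algorithm actually linked."""
    return true_links / (true_links + missed_links)

def specificity(true_nonlinks: int, false_links: int) -> float:
    """Fraction of genuinely non-matching pairs correctly left
    unlinked; 1.0 means zero false links."""
    return true_nonlinks / (true_nonlinks + false_links)

# Illustrative counts, not figures from the testimony.
print(sensitivity(900, 100))  # 0.9  -- 90 percent of true matches found
print(specificity(5000, 0))   # 1.0  -- no false links at all
```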

I think this slide is informative because, as we grapple with the issue of a
National Patient Identifier, we looked at the performance of Social Security
Number, which arguably is not a great identifier: it doesn't have a check
digit, we know there are duplicates assigned, and there are a host of issues
related to it. In our
two hospitals we saw that there was a false linkage rate where patients could
have been joined together. If you use Social Security Number as the only
identifier we found that in Hospital A if we used just the Indiana subset of
the Death Master File, just looking at patients from the Death Master File who
were in our hospitals in Indiana we saw that there was a nine percent false
link rate, and I'm going to show you on the next slide why there were false
linkages. And in Hospital B, which recently underwent a master patient index
cleaning we saw that there’s a lower false positive rate but it’s still finite
if we use Social Security Number as a single identifier.

This slide shows the causes of false links, first of all in our hospital
data, by the way this is fictitious data reflecting what we actually saw, but
in the hospital data we might have Isaiah Doyle and that may link in the Death
Master File to Ella Doyle. And what we found if we dig through that Ella Doyle
actually had two different Social Security Numbers, so that’s one issue. The
other is typographical errors where we may see Frank Small and Pat Jones linked
together through Social Security Number. It turns out that Frank’s Social
Security Number was mistyped in our hospital system.

And there were unexplained collisions: patients who linked together where we
dug through both our hospital system and the Social Security Death Master File
to try to find reasons why they were linked, and there were just some out there
that we could not explain. So there are some issues here and we need to keep
that in mind as
we think about a National Patient Identifier.

This slide shows if you take away Social Security Number and use name, date
of birth and gender alone as the matching parameter, within the Indiana subset
we maintained 100 percent specificity meaning we didn’t have any false links if
we only linked our hospital data to patients known to exist in Indiana. When we
released that constraint and went to the full Death Master File our specificity
dropped, there were false links in the system using full name, date of birth
and gender. We saw the same occurrence in the second hospital as well: 100
percent specificity within Indiana, with specificity dropping when you linked
to the full nation.

This is why. In the Death Master File we have 16,000 William Smiths and
15,000 James Smiths, and some of those are going to be born on the same day. So we
need to be careful about the specificity of the identifiers we’re using and so
within a geographically constrained region within a regional health exchange,
name, date of birth, may be reasonable linkage variables but as you expand
nationally to larger regions we need to be thoughtful about what we’re doing
with that data.
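The scale of that collision problem checks out with back-of-envelope arithmetic. Assuming, unrealistically, that birth dates are spread uniformly over roughly a century (my simplifying assumption, not a figure from the testimony), the 16,000 William Smiths alone would be expected to yield thousands of same-name, same-birth-date pairs:

```python
from math import comb

n_smiths = 16_000   # "William Smith" records in the Death Master File
days = 36_525       # about 100 years of distinct birth dates (assumption)

# Expected number of record pairs sharing an exact birth date,
# under the uniform-birth-date assumption.
pairs = comb(n_smiths, 2)              # 127,992,000 candidate pairs
expected_collisions = pairs / days
print(round(expected_collisions))      # 3504
```

Thousands of colliding pairs from a single common name shows why name plus date of birth cannot safely disambiguate at national scale.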

Next, we replaced Social Security Number with zip code in this same dataset.
By the way, the way we came up with these numbers is that I manually linked all
potential patients: I randomly selected 24,000 patient identifiers and manually
reviewed them to determine true or false positive status. So if we took this
same dataset we were using to analyze Social Security Numbers and used zip code
instead, looking at combinations of zip code, phonetically transformed names,
gender, and various combinations of year, month and day of birth, we maintained
our specificity, we were able to protect against the false positives. But our
sensitivity went down, because we know that people move, so zip code data is
not stable.
A general comment and observation: there is a paucity of research describing
the performance of different linkage algorithms. I think there needs to be a
better understanding of what data we're missing. We need to come up with
techniques, methods of creating gold standards, that can help us analyze and
understand what's going to happen with this data. I think we need a better
understanding in that area.

My current research compares the performance of probabilistic linkage
algorithms, which as everybody knows leverage more of the information in the
data. In the interest of time I don't want to go into the linkage algorithms
themselves, but we know that probabilistic linkage algorithms can accommodate
errors in a more reasonable fashion. Again, though, there's very little
literature doing the Pepsi Challenge of how a deterministic, simple rule-based
heuristic algorithm compares with the probabilistic algorithms, and one of the
things we're doing is looking at probabilistic algorithms, things used by
Initiate and other vendors, and comparing them with the performance of the more
heuristic fuzzy-logic algorithm that Regenstrief uses, which I'm going to show
in a minute.

Again, the outstanding question in my mind: how do probabilistic linkage
algorithms compare with algorithms developed by folks who have worked with
hospital registry data for several decades and have come up with what they
think are best practices for linking data? I don't think that question has been
answered, and I think we need to understand that.

What are the best practices? Do we know that the same probabilistic or
deterministic algorithm we use in Indiana, with good old Midwest names, is
going to work as well in the Southwestern United States, with a higher
proportion of Hispanic names, or in the Hawaiian Islands, where we have Asian
names, very short names that are very difficult to use in patient linkage? We
need to understand those best practices as we build out the

Very basically just to sort of get your mindset around probabilistic
linkage: typically you form potential record pairs from the two files that
you'd like to link, or from within the single file whose records you'd like to
link together. Each record
pair is assigned a score and in the interest of time I’m not going to go
through the scoring algorithms but those scores ideally would form a bimodal
distribution. So lower scores on the left would represent non-links, higher
scores on the right represent true links. The challenge and where much of the
work is being done in probabilistic linkage right now is answering the question
which are the true links in the middle area. So typically what happens now is
that folks set a lower threshold, below which they'll simply say everything is
a non-link, and an upper threshold, above which they say this is a true link,
and these middle pairs are relegated either to human review or simply assigned
to non-link status.
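The two-threshold decision rule just described can be sketched in a few lines; the threshold values below are arbitrary placeholders, not values from any real system:

```python
def classify_pair(score: float, lower: float, upper: float) -> str:
    """Two-threshold rule for probabilistic linkage: scores below the
    floor are non-links, scores above the ceiling are links, and the
    gray zone in between goes to clerical review (or, in some systems,
    defaults to non-link)."""
    if score < lower:
        return "non-link"
    if score > upper:
        return "link"
    return "review"

# Placeholder thresholds for illustration only.
for s in (2.0, 11.0, 23.0):
    print(s, classify_pair(s, lower=5.0, upper=18.0))
```

The open research question the testimony raises is how to collapse this to a single threshold whose true- and false-positive rates can be predicted in advance.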

One of the things we’ve been working on is being able to set a single
threshold that accurately predicts what your true positive and false positive
rates are going to be. From some of the early work we've done looking at
accuracy and false links, I put this slide up to compare our deterministic
algorithm which maintained 100 percent specificity, no false links, with the
probabilistic algorithm using the same data. The probabilistic algorithm
clearly showed increased sensitivity, there were more true links found with the
probabilistic algorithm but at the expense of slightly decreased specificity,
we had some false links that were included in the system.

And what some folks with their probabilistic algorithms do is they actually
perform the initial probabilistic analysis and then incorporate in rules to
defend against the potential false links afterwards. So it’s rare that you see
a truly pure statistically independent probabilistic algorithm that doesn’t
incorporate some sort of rules after the fact.

What are these rules that help improve sensitivity but protect specificity?
Those are the types of questions I think we need answered as we move forward.

So with that said I’m going to transition now to the Regenstrief algorithm
that we use in the Indiana Network for Patient Care. The INPC has been in
existence since 1994, it consists right now of 17 hospitals who share data
across the community. That data is linked through a common global patient
index. By way of illustration, here are some data sources that we actually have
in the INPC today: health departments sending immunization registry data into
our system, and the electronic medical record system in my office; we both
administer shots. We have a global patient index that manages this information,
so we might have demographic information from both of these data sources; the
demographic information is slightly different, there will be different
identifiers in each system, and there may be errors in those identifiers.

What we do with the global patient index is anytime a new patient
registration occurs or a result flows into the Indiana Network for Patient Care
it’s compared against the global registry. If that patient is not known to
exist in the system we will take the new identifiers and add them to the system.
So in this example our global patient index is aware of the patient at this
point. Now I’ll go into more details about how we’re certain or aware of
patients in a moment but as a general overview we’ll take that new patient
identifier and add it to a list of identifiers in our registry. And the same
happens for my EMR as well and so we maintain a list of identifiers that point
to different data sources within our system.
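The registration flow just described can be sketched as follows. The matching rule inside is a deliberately naive placeholder (exact name and birth date) standing in for the real algorithm:

```python
class GlobalPatientIndex:
    """Sketch of the flow above: every incoming registration is compared
    against the registry; a recognized patient gains a new identifier in
    its group, and an unknown patient starts a new group."""

    def __init__(self) -> None:
        self.groups = []  # one list of identifier records per patient

    def _matches(self, rec: dict, group: list) -> bool:
        # Placeholder rule (hypothetical): exact name and birth date.
        return any(rec["name"] == g["name"] and rec["dob"] == g["dob"]
                   for g in group)

    def register(self, rec: dict) -> None:
        for group in self.groups:
            if self._matches(rec, group):
                group.append(rec)   # known patient: record new identifier
                return
        self.groups.append([rec])   # unknown patient: new group

# Hypothetical sources and local IDs.
gpi = GlobalPatientIndex()
gpi.register({"source": "HEALTH_DEPT", "id": "A1",
              "name": "ALICE K", "dob": "1970-01-01"})
gpi.register({"source": "OFFICE_EMR", "id": "Z9",
              "name": "ALICE K", "dob": "1970-01-01"})
print(len(gpi.groups))      # 1 patient group...
print(len(gpi.groups[0]))   # ...holding 2 identifiers
```

The group structure is what lets the index map a lookup back to every local identifier a patient is known by.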

What we do is we take all of these various sources and again all
registration messages, whether they come from hospitals, whether they come from
reference laboratories, whether they come from ancillary services such as
radiology systems, those messages are compared against the global registry to
understand what source they came from and where they belong and where they need
to be stored in the system for later lookup.

And that brings us to the global patient index. If you want to build a
regional health care information system, at its most fundamental level you need
to be able to identify two things: patients and concepts. We're talking about
patients today, and patient matching forms one of the key foundations of any
system like this.

So I want to talk very briefly about some of the key features of our
algorithm, and then I'm going to go into more detail. There is one record per
assigned medical record number per institution, so what that means is if a
hospital has two medical record numbers for a single patient there will be two
records in our global patient index. We will know about the unique identifiers
from each of these hospital systems.

Our algorithm implements the concept of a patient group as I just showed
you, we keep track of all of the different identifiers that the patient is
known as in each of these institutions, so we keep a patient group, a group of
equivalent records in our system.

We match using Social Security Number when available, and we know that Social
Security Number is present on approximately 70 percent of our registration
records. We use patient name, birth date and gender. And we use string
comparator algorithms, in particular an algorithm called the longest common
substring, which can take two text strings, compare them and determine how
much of one name or text string is contained in the other, and we use that for
nearness matching as well. We do also allow for minor transpositions in the
birth date.
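The longest common substring comparator described here can be sketched in a few lines; this is a generic illustration of the technique, not Regenstrief's actual implementation:

```python
def longest_common_substring(a: str, b: str) -> int:
    """Length of the longest contiguous substring shared by a and b
    (classic dynamic programming, O(len(a) * len(b)) time)."""
    best = 0
    prev = [0] * (len(b) + 1)
    for ch_a in a:
        cur = [0] * (len(b) + 1)
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

def lcs_ratio(a: str, b: str) -> float:
    """Fraction of the shorter string covered by the longest common
    substring -- 'how much of one name is in the other'."""
    if not a or not b:
        return 0.0
    return longest_common_substring(a, b) / min(len(a), len(b))

print(lcs_ratio("JOHNSON", "JOHNSTON"))  # shares "JOHNS": 5/7, about 0.71
```

A ratio like this gives the "nearness" score that a threshold (such as the 60 percent figure mentioned later in the talk) can be applied to.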

So here is a sample, again this is fictitious data representing what we
might see in our patient index. We have an assigning authority, so we know what
data source each record came from. We have a global patient ID, which as your
slides indicate is an internal number used for indexing purposes only; we
don't share the global ID with any of our institutions.

Then we have the local patient number, that identifier that each of the
institutions or data sources knows their patient by. We have of course patient
name, date of birth, gender, and we also have Social Security Number, I
apologize, I neglected to include that in the slide but Social Security Number
is there as well when present.

Now to the human eye we can look at this list and most of us would agree
there are two patients here, we have Alice and Ralph. Our algorithm though may
treat them as three different patient groups because that second record for
Ralph is too different from the others: Ralph's birth date in that red record
is off, and his first name is substantially different from the other two, so we
may keep that separate. And one of the design principles Clem built into this
system was that we want to fiercely defend against the false positive; we
don't want to link patient data inappropriately. For the John Smiths of this
world, you'd hate to walk in one day and have your doctor tell you
inappropriately that you have HIV. That's of course a worst case scenario, but
these are the issues that you need to think about.

So we believe at Regenstrief that a false positive is a very bad thing and
we want to defend against that fiercely, and part of the reason we have a
heuristic algorithm is that we can get our minds around what types of matches
we're going to see with it. I don't want to misrepresent myself, I'm very
interested in probabilistic algorithms and believe they are the way to go, but
in probabilistic algorithms there is a potential for an unpredicted link to
occur because we can't anticipate all the possible things that people are
going to enter into their systems; we have 17 hospitals and multiple
registration sites, and we can't anticipate what they might put in. So we've
designed a system that we can at least understand and feel comfortable with to
defend against the false positives. And I'll make the point again that the
global ID is for internal indexing only and it's not publicly exposed.

This is the algorithm; I'm not going to belabor this but I will point out a
couple of points. When a message comes in, a registration message or a results
message comes into the Indiana Network for Patient Care, we look at the
hospital identifier and look to see if we have that hospital identifier in our
global registry. If we do, we look to see if the key data elements, that is
the Social Security Number, name, date of birth or gender, have changed. If so
we mark that record to be re-matched, and several times a day we run a batch
process that re-matches all of the records marked as being changed.

So let's say we have a patient who comes in, Mary Smith gets married and
now is Mary Jones. Her registration message would flow into the system, we'd
recognize that her last name has changed, and she'd be flagged to be
re-matched. We'd ask the question, is name and date of birth present, let's
say in this case it is; is the gender male or female, it is, so we keep moving
forward. Is the global ID present, do we know about this patient from previous
times? If so, are there multiple patients in this patient group? If there are,
do all of the members of this patient group still match with one another? If
all members still match with one another then nothing is done with this
record; that particular registration or result is simply tagged to this
patient group.

Now if for some reason that new record did not match with the current
existing patient group, that patient group gets broken up. If that patient
doesn't match with all the existing records in that patient group it will
receive a new global ID.

There are instances, and again I don't want to get into too many details,
but there are instances where a patient group can be broken up because of the
transitive property: if A is equal to B and B is equal to C, is A equal to C?
We don't assume so; we prevent A from being treated as equal to C unless they
match directly, and so we'll break up patient groups.

The matching is done as follows: if Social Security Number is present we
allow a longest common substring of 60 percent on the patient name, and the
date of birth also has to match, with transpositions allowed. If no Social
Security Number is present we require exact matching on name, date of birth
and gender. And name can be phonetically transformed: we use something called
NYSIIS, the New York State Identification and Intelligence System phonetic
transform. The Census Bureau back in the '80s did a study of different
phonetic transformation algorithms and the NYSIIS algorithm seems to have
reasonable discrimination, much better than Soundex does, and so we will
phonetically transform names as well. So that in a nutshell is the algorithm.
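Putting those rules together, a simplified sketch of the decision logic might look like the following. The thresholds come from the talk, but the field names, the adjacent-digit transposition check, and the phonetic stand-in are all illustrative assumptions (real NYSIIS is considerably more involved than the crude key used here):

```python
from difflib import SequenceMatcher

def lcs_ratio(a: str, b: str) -> float:
    """Longest common substring length over the shorter string's length."""
    if not a or not b:
        return 0.0
    m = SequenceMatcher(None, a, b, autojunk=False)
    return m.find_longest_match(0, len(a), 0, len(b)).size / min(len(a), len(b))

def dates_match(d1: str, d2: str) -> bool:
    """Exact match, or a single swap of two adjacent digits (illustrative)."""
    if d1 == d2:
        return True
    if len(d1) != len(d2):
        return False
    diffs = [i for i in range(len(d1)) if d1[i] != d2[i]]
    return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and d1[diffs[0]] == d2[diffs[1]]
            and d1[diffs[1]] == d2[diffs[0]])

def phonetic_key(name: str) -> str:
    """Crude stand-in for a NYSIIS-style phonetic transform (NOT real
    NYSIIS): keeps the first letter plus the remaining consonants."""
    name = name.upper()
    return name[:1] + "".join(c for c in name[1:]
                              if c.isalpha() and c not in "AEIOUY")

def records_match(r1: dict, r2: dict) -> bool:
    if r1.get("ssn") and r1.get("ssn") == r2.get("ssn"):
        # SSN agrees: looser name threshold, date transpositions allowed.
        return (lcs_ratio(r1["name"], r2["name"]) >= 0.6
                and dates_match(r1["dob"], r2["dob"]))
    # No SSN: exact (phonetically transformed) name, date of birth, gender.
    return (phonetic_key(r1["name"]) == phonetic_key(r2["name"])
            and r1["dob"] == r2["dob"]
            and r1["gender"] == r2["gender"])
```

With SSN agreement, "JOHNSTON ALICE" born 1970-03-12 and "JOHNSON ALICE" born 1970-03-21 would match (the shared "ON ALICE" substring clears 60 percent and the day digits are a single transposition); without SSN, only phonetically identical names with identical birth date and gender match.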

In development our engineers have reviewed potential problem matches so when
we were developing this algorithm there was a great deal of human review that
went into testing it, we periodically review matches to make sure that we’re
not seeing any false matches in the system as well.

So I want to show you a practical example; this is what the actual system
looks like. If a patient shows up in the emergency department at one of the
institutions in the INPC, the emergency department physician can search for
that particular patient, typing in name, date of birth, or parts of the Social
Security Number. Once that information is entered, potential hits are shown,
and one of the keys to our system is that we only display patients who are
known to have visited that institution. So if I'm an ED physician and a
patient shows up at my emergency department I have rights to search for that
patient; I don't have rights to search for patients at another hospital.
Patients only show up on this list if they're known to have visited that
hospital, that organization.

Once I select a particular patient I'm then presented with the different
institutions where that patient is known to have data, and you can see in this
slide, again this is an actual look-up, that our matching algorithm has
detected that this patient has two medical record numbers in one of the
hospital systems in Clarian, so we've effectively de-duplicated one of the
hospital's records. We have records from Clarian which is a hospital system,
Community which is a hospital system, Marion County Health Department and
Wishard Hospital for this patient. I can choose one or all of the institutions
for this patient's data. Once that's selected, this is simply an example of
how we aggregate data: this patient has had bilirubins performed over a period
of time at three different locations, and our system is capable of integrating
that data and putting it together chronologically from the different
locations.

So I've presented a lot of data today, and I recognize that perhaps it's
been technical, but I also want to take a step back and present some thoughts
to you on this issue. I don't think the National Patient Identifier is a bad
thing; I think a National Patient Identifier would help patient matching. But
we need to balance short term interests with long term interests: I don't
think we're going to be rolling out a National Patient Identifier over the
next three years, and if we want to get work done we're going to need to rely
on patient matching algorithms.

But unquestionably, information theory tells us that if we add an identifier
that's ubiquitous and unique we're going to do better with matching. An opt-in
policy for a National Patient Identifier could easily be incorporated into
probabilistic matching algorithms. Matching algorithms are still going to be
necessary in both the short and long term because patients are going to show
up without their identifiers, identifiers are going to be mistyped, and we're
still going to need that look-up ability.

The second point that I'll leave you with is that because this is so
fundamental to local, regional and national efforts, we need to better
understand linkage best practices and get those published: what fields work
well in what regions under what circumstances. I haven't gone far into the
area of string comparators; there are a number of methods being looked at
right now but no clear consensus, other than to say probabilistic matching
seems to work. We need to dig a little deeper there.

We need better methods for gold-standardizing our datasets to understand
what links we're actually missing in these linkages. It's very hard to
characterize what sensitivity is, because you don't know the total true links
that might exist between two million-record data files; it's too hard and too
resource intensive to try to find all of those links. So we need to think
about those issues as we move forward trying to understand these linkage
issues.

So with that I’ll turn it back to you folks.

MR. REYNOLDS: Okay, I’d like to open it for questions. Marjorie and then
Michael and then Stan, this would be a full service questioning period here, so
we’ll just go around the table. Marjorie, why don’t you begin please?

MS. GREENBERG: Thank you very much, and please say hello to Clem for us. I
was just attending the annual meeting of the National Association of Health
Data Organizations, and there were several RHIOs and other related
organizations presenting, and I think every one of them mentioned this as a
major challenge and requirement for them to really move forward. And what I
wondered was, since Regenstrief is a leader in this area and you are
specifically focusing on it and an expert in it, is there any method or
mechanism that you're aware of right now that is systematically allowing all
these different RHIOs and organizations to share information on best
practices, to work together on this? Although I can see advantages to letting
a thousand flowers bloom to some degree, the idea of everyone having to
reinvent the wheel on a very similar problem doesn't seem sensible either. So
is there some mechanism for this that you're aware of?

DR. GRANNIS: Well, we are working with the Boston folks and Mendocino County
in California on the record locator service. Each organization is using a
slightly different algorithm: I believe the folks in Boston are using the
Initiate Systems product, we're using ours, and Mendocino County has a
home-grown system.

The record locator service at this point is more about exchanging data once
it’s found within an individual institution but to date to my knowledge there
hasn’t been a lot of exchange on okay, if you’re using algorithm A and we’re
using algorithm B what implications does that have for matches between us and
if Boston is using algorithm C what implications does that have for matches
between them. That type of exchange I think needs to occur.

MR. BLAIR: It needs to what?


MR. BLAIR: You said exchange, what do you mean by that?

DR. GRANNIS: She asked about information exchange about best practices and
what works well in one community.

MS. GREENBERG: And then how these might work together because as you said
the idea of a national uniform approach right now is not in the near horizon
and yet as we know patients can go anywhere from anywhere.

MR. BLAIR: When you say exchange I’m still not sure what you are saying when
you talk about exchange, the patients in each of those areas are in different
geographic areas, are you talking about discussion, are you talking about
clarification of whether or not the algorithms will cause problems if these
people travel, what do you mean by exchange?

DR. GRANNIS: We were talking about exchange of best practices so perhaps a
dialogue of ideas —

MR. BLAIR: Thank you.

MR. REYNOLDS: Steve, you had a follow-up to this?

DR. STEINDEL: Yeah, just a follow-up comment. Marjorie, if I can interpret
your question, there is a tremendous need for a national mechanism for
exchange of this type of information, not just record locator services but
also best practices for the emerging Regional Health Information
Organizations. And I think there are several groups that are starting to do
that; I know HIMSS is trying to position itself as the repository for this
type of information exchange.

Mike, isn’t AHRQ trying to put together some best practice information also?
Those are the two that I’m aware of and there may be others.

DR. FITZMAURICE: Yes, we have a national resource center that is pulling
together best practices, problems, and lessons learned from the six states that
we have funded.

MR. REYNOLDS: Okay, Marjorie, was that your only question? Okay, Judy, did
you have one?

DR. WARREN: I have several. You had mentioned that one of your challenges
was last names and I was wondering in your database did you run across any
cultures where designating a family name may not always be the last name?

DR. GRANNIS: Yeah, we see Hispanic names where, not uncommonly, children may
take the mother's name as well, and so we'll see a lot of hyphenation and
reversing of names. In Asian populations the concept of last name, first name,
middle name doesn't fit well with ours, so we see a lot of transposition
there. The longest common substring algorithm though accommodates that, so it
will accommodate very easily transpositions of first name, last name and
middle name.

DR. WARREN: I guess the question I had is, especially with Asian names, many
times you don't know which is "the family name" or the last name, and
which is the given name, because they switch them trying to become more part
of this culture. So have you found any problems with that, or with trying to
untangle that piece?

DR. GRANNIS: That specific piece no, the issue of ethnicity though does come
up and there’s even some talk, some consideration, is there a way to fairly
reliably identify ethnicity of names and include that as a parameter in the
linkage algorithms. So if you determine this name to be short and of Asian
descent you may want to weight that much lower because you know there are
issues with transpositions.

DR. WARREN: Okay. And then the last question I had is actually a question
based on a comment that you made about wanting some way to evaluate the
goodness of the algorithm: do you have any suggestions for what that might be,
or how we might go about setting a standard for evaluating a matching
algorithm?
DR. GRANNIS: Well, some of the work we're looking at at Regenstrief right
now, just to be very explicit: we have in our global registry over five
million records representing over a million and a half patients. We want to do
some comparison of algorithms on this data. The challenge we continue to run
into, the wall we run up against, is how many true links are in here and how
do you accurately characterize that.

There have been some conversations with Mendocino County as well about
taking their algorithm, bringing it into the fray and comparing how many links
each finds, what the performance is. And I think we need some sort of Pepsi
Challenge; in fact the CDC put out a 500-record immunization test set, but I
think 500 records is too small and it doesn't directly apply to patient
registration messages for developing a global patient index. Something along
those lines, where you can create a large gold-standardized record set against
which you can compare different algorithms, I think would go a long way.

Now there's going to be a lot of debate and discussion over what is the
right dataset and what characteristics and features you assume in it. And I
know that Initiate Systems has developed a modeling system to try to come up
with a way to model that as well.

DR. WARREN: Okay, and then I really want to push you, and if you don't want
to answer this that's fine. Would you see algorithms being certified as good
by some standards body or something? Because one of the things that interested
me in your presentation is that some of these algorithms may not be
interoperable with each other, and so if you've got patients identified with
one algorithm that are not identified with the other, then we're back to their
being unlinked with their data again.

DR. GRANNIS: The certification question is always a tough one, and in some
ways it's more of a cultural and political process than a technical one. I
think getting the best practices out there will help us a long way towards
that end, and in fact a standards body like HL7 or HIMSS or somebody else
could even take that on, advocating for particular algorithms. I guess it's
the degree of certification that I'm uncomfortable with, but I think somebody
needs to carry this flag.


MR. BLAIR: Shaun, thank you very much for trying to give us at least an
introduction to understanding the alternatives we’re facing. I’m struggling
with this and to be honest with you there’s a lot of ambiguities here so I’m
going to paint a little bit of a picture of where I am so that if my initial
assumptions are not correct then start off by correcting my initial
assumptions. And they’re going to be very general and very crude.

Your statement that we're unlikely to have a National Patient Identifier in
the near future I would agree with; I think politically that's just very
unlikely to happen, so I kind of put that aside.

The next piece is that, whether probabilistic or deterministic, whether one
is better than another, it sounds like probabilistic has gotten to a good
enough level in local areas that it's workable, it's working reasonably well.
We could talk about the algorithms, but the issue is trying to make sure that
they're consistent from location to location.

Here's the thing that I'm worried about; I wasn't worried about the first
two assumptions unless you tell me that I'm wrong on them. The area that
really worries me, that I can't figure out how we're going to deal with, is
infectious diseases, response to national health emergencies, and the portion
of our population that's not part of a health plan and doesn't necessarily
want to use their real identity. Let me talk about the infectious diseases
first: we have folks that may come from other countries, and they work in one
area and then they migrate to a second area and a third area and a fourth
area, and they're not covered by health insurance.
We don't have good data on them in the first place, but it isn't just that
they need health care individually, it's that they may be part of transmitting
infectious diseases, so we can't ignore their health care needs, and their
health care needs may have an impact on all the rest of us who may be enrolled
in a health plan. So it becomes really important. And that's one area where
you just don't have a lot of good data and you're not even sure the person is
going to identify themselves as who they really are.
The other piece is the fact that we may be traveling and if there is a
national health emergency or bioterrorism then the issue of going from one
geographic part of the country to another or from traveling from Europe or
Japan or whatever means that even a national system for identifying a patient
may fall short of what we need to do.

So then I start to wind up saying, well, it sounds like we need biometrics
to really identify somebody, but you point out that biometric systems don't
all have the same standards.

So I’m sort of left in this area of saying that we’ve got a number of really
difficult hurdles to go through, do you see a path, even if we don’t have the
solutions all in place now, where you can see step one, step two, step three,
that could be done within the next two to three years where we could converge
and really address these types of issues?

DR. GRANNIS: I think the record locator service initiative can be one avenue
for moving forward on accuracy; this is the first opportunity we've really had
in this country to share information and test different regions' matching
systems with one another to understand how they perform together —

MR. BLAIR: Now you say the record locator service —

DR. GRANNIS: The Markle Foundation is sponsoring a record locator service
initiative between Boston and Indianapolis and Mendocino so that provides an
opportunity to look at data exchange between three separate regions and
understand how these three different systems might work together.

I don't think we're ever going to arrive at patient identification nirvana
unless we use DNA coding as the ultimate identifier and standardize how we
encode DNA for each individual. Short of that we are never going to get 100
percent sensitivity and specificity. I was just talking with Ken Mandl out in
Boston yesterday; he uses a thumbprint identification system to get at
supplies in his emergency department. He repeatedly tries his thumbprint three
times and then enters his override password to get in, so there are problems
with these systems. Real life gets in the way of idealism all the time, and
that's really what we have to deal with here. How can we make it better and
continue making it better is the question I ask myself, and I think part of
the pathway is doing the experiments with record locator services,
understanding what happens with this record exchange, understanding what the
accuracies are, and moving forward.

I think that we're going to see a combination; as biometric technology
emerges and evolves we may see that come into the fray, but again I don't
think that's something that happens anytime soon. And we see all of this
activity in health data exchange, folks wanting to exchange data, and the
tools we have right now are record linkage algorithms, and probabilistic
algorithms are the ones that seem to be working. So that's our tool right now;
we've got to leverage that tool and make it as good as we can while we
continue to understand these other issues.

MR. BLAIR: Let me give a little feedback. I think you're saying, number one,
we need to wait until the record locator service gives its findings and
reports between Mendocino County and Indiana and the Boston area, and you'll
learn some things there. But does that test include things like migrant
workers?

DR. GRANNIS: The migrant worker question may be something slightly
different; let me just walk through that. First of all, if migrant workers
provide accurate identifiers and get into the system, as long as they're
providing some sort of reasonable identification, we should be able to track
them. With this type of system that we're talking about with the record
locator service, we're not injecting use cases, we're using real live patient
registration information —

MR. BLAIR: This is obviously going to be helpful and we'll learn a lot from
it, but will it help us to understand folks that don't want to be identified?

DR. GRANNIS: If they don’t want to be identified they can just lie about who
they are.

MR. BLAIR: Okay, so we don't have anything other than biometrics or DNA to
get to that, and that's the reality —

DR. GRANNIS: — question about overriding a patient’s desire to —

MR. BLAIR: Pardon?

DR. GRANNIS: Your question is about overriding a patient’s desire not to be
identified —

MR. BLAIR: I’m thinking of nationally and globally, yeah. Okay, thank you,
you did help me understand it a little bit better.


DR. STEINDEL: Well, thank you, Harry. I'd like to make a comment first on
Jeff's last set of concerns. I was going to look up Ms. Solomon's first name,
but when we were talking about the personal health record in the NHII
Workgroup we heard from a representative who's providing personal health
records to migrant farm workers in the California area, and basically these
people will identify themselves; they realize the importance of health care.
So I'm not totally certain about the magnitude of Jeff's problem. I do agree
it exists, I'm just not certain of the magnitude, because as you point out,
once they identify themselves they can be tracked. So that was just a comment
in response to Jeff.

But I have what are mostly comments, and I'd like you to react to them. My
sense is that what's driving the heuristic algorithm, the approach you're
taking at Regenstrief, is the conservative nature with which you're
approaching matching patients: that you want to maximize the true positive
rate and be assured that when you do match patients they are correct. That
seemed to drive a lot of what you're doing in your heuristic algorithm.

Some observations I'd like to make: first of all, when you were talking
about your initial test with Social Security Number, what I was very impressed
with was how the statistics changed when you went nationwide, the decreases in
the rates. And when we're talking about a nationwide health information
network, those are the statistics that we need to look at when we're trying to
match patients, not the local regional statistics.

In the example that you gave, where you had these two sets of patients that
may be identical, you said you created three patient groups with them, and you
pointed to, I think it was Ralph Stockwell: the difference in the two patient
records for Ralph Stockwell was that in one he was entered as RJ and with a
birth date that had a transposition. We might consider that a match in a less
conservative sense, so that's part of what's driving what you're doing there
and the decisions that you're making, and I think it's subject to debate
whether that is a correct decision or not.

DR. GRANNIS: Absolutely.

DR. STEINDEL: The other thing that struck me was the example that you gave
of registration information and a marriage name change. This person may have
been seen by N institutions and may have had one consolidated patient record
under the maiden name. They may have just gotten married and gone to see one
institution, so you got one registration change, but now you've created a
disjoint in the records. And yet you do have information that you might be
able to send to the other people to cause this disjoint to go away, but you're
choosing not to, which is another part of the conservative approach.

The third thing that struck me was one of the things that we want to do with
a nationwide health information network is to allow people to be able to find
the locations of people in other institutions and yet when a person shows up in
your emergency department which is generally the case that we’re talking about
they’re limited to see only the records of patients who have been seen in that
institution —

DR. GRANNIS: That’s who you have to choose from in order to say this is,
this patient is here, once I select that patient —

DR. STEINDEL: That’s the point that I’m getting at, what if that patient had
never been at that institution?

DR. GRANNIS: Then why are you looking up their data?

DR. STEINDEL: They’re at your emergency room and you want to see if they
have something —

DR. GRANNIS: Well you’ve got that registration message, that opens it up for
you. The first time they’re there you can see them.

DR. STEINDEL: I don’t think we fully understand the scenario then.

DR. GRANNIS: When the patient shows up at an institution you have rights to
see that patient's data, and they will show up in a look-up list. The John
Smith from institution B, you don't see him in your look-up list; that's the
point I was making, we're not over-disclosing information —

DR. STEINDEL: I think that was a little unclear in the presentation and
thank you for clarifying it, that’s very good.

The last comment I'd like to make is with regard to certification, which you
addressed in answer to Judy's question. As part of the ONC contracts, one of
the contracts involves certification: the first year is ambulatory care, the
second year is institutional care, and the third year is networks. And I'm
reasonably certain ONC is going to ask for certification of patient matching,
so I think it is coming and on the horizon. And like I said, these are really
not questions but comments, and if you have any response to them I'm sure
we'll be happy to hear it.

DR. GRANNIS: Absolutely. On the first point about the conservative nature, I
completely agree with you, but you had said that we want to maximize true
positives; actually our driving force is minimizing false positives. That's
the driving force here, we want to fiercely defend against falsely linking
patient data.

On your second point, looking at going from regional to national: for
national purposes, absolutely, we want to look at the national data. But I
think it's important to realize that for Regional Health Information
Organizations, we're using name, date of birth and gender as our matching
criteria in Indianapolis, and it works because most health care is local; we
know that less than five percent of all ED visits are from out of state. So
when you're constrained geographically, things work. I think we might both be
saying the same thing: what works locally may not work nationally, but it does
work locally, and I think that's an important point to remember.

I appreciate your comments —

DR. STEINDEL: Yes, we are saying the same thing in regard to that.


DR. HUFF: This was a great presentation, I appreciate you coming. A couple
of things, just to be clear, when you do have errors, and I understand exactly
where you're coming from: if you falsely match somebody, the clinical
implication is I could be making decisions about this patient on data that's
not really their data. What you have there is basically somebody else's data
that looks like it's my data, so I make a false judgment medically about what
I should do for you based on the data that's there.

The other error is bad as well; in other words, if I don't match you when I
could have matched you, it means my data is on two different records, and so
data that's mine is now not available. You don't have bad data, but data that
is available is not used in the decision, and that's the error on that side.

I mean, we have this natural sort of instinctive thing of saying, gee, we
certainly never want to be mixed up where we're thinking this data belongs to
this patient when it's really not their data. Do you have an idea of the cost
to the system when you make the other kind of error? Basically, if my data is
there and it's not available for the decision, that doesn't seem medically as
bad, but have you tried to quantify that in any way, so you could actually in
some dollar sense trade off the difference between sensitivity and specificity
in these —

DR. GRANNIS: Sure. Marc Overhage in 2002 published a study, a randomized
controlled trial of health information exchange, looking at emergency
departments that did have access to the complete data record and those that
did not. One of the institutions that had access to more complete data
actually showed cost savings; the other institution was equivocal, certainly
not more expensive. So there are clearly some cost savings to be had by
reducing duplicate tests and other sorts of measures. It's a hard question to
answer, but I think we clearly have hints and good arguments for why it does
save money.

DR. HUFF: The second question. I think we’re in fair consensus that in the
near term we’re doing what we can do, and I would like to get your perspective
on this: if we assume we had five years, or maybe even ten years, to do
something, what would be the right thing to do? The context I’m putting that in
is thinking of convenience to patients, the speed with which I can register
them into the system or recognize them at a visit in an ER or even for routine
visits, and how that identification might link to bar coding and other kinds of
automation that would make health care faster and more efficient. What are your
thoughts on the gains that could be made in the longer term, and what might be
the most promising way to realize those gains?

DR. GRANNIS: Sure, sure. The third rail here, I think, is the National
Patient Identifier. The privacy concerns I don’t have good contact with, so I
can speak from the technical side and from the operational side. Having a
National Patient Unique Identifier with a check digit, and I’m belaboring that
point because the check digit will make a big difference if we ever see it,
will speed registration, will improve the accuracy of probabilistic linkage,
and will improve the aggregation of patient data. I have no doubt about that.
But the challenge, the uphill battle, is the social and political concerns
surrounding that —
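
Dr. Grannis does not specify which check-digit scheme he has in mind. As one
hedged illustration, the mod-10 Luhn algorithm, which is the scheme the
existing National Provider Identifier uses, catches every single-digit typo
and most adjacent-digit transpositions in a keyed identifier:

```python
def luhn_check_digit(payload: str) -> str:
    """Compute a Luhn (mod-10) check digit for a numeric identifier.

    Illustrative only: the testimony does not say which scheme a National
    Patient Identifier would use; Luhn is one common choice.
    """
    total = 0
    # Walk from the rightmost payload digit; double every second digit,
    # subtracting 9 when the doubled value exceeds 9.
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)


def is_valid(full_id: str) -> bool:
    """True if the last digit is the correct check digit for the rest."""
    return luhn_check_digit(full_id[:-1]) == full_id[-1]
```

A registration clerk's typo or transposition then fails validation immediately,
before any matching algorithm ever sees the record.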

DR. HUFF: And just one question on that. One of the things that strikes me
about this is that, if you make the analogy to my credit card, I feel more
secure going to the local gas station and swiping my card than I would if every
time I went to get gas I had to give my name, my address, and my birth date.
The protection everybody is concerned about is in fact defeated by the fact
that I have to give out the very information I’m trying to protect every time I
receive service, whereas if I gave it once I’d then be working with an
anonymous number, and only the people I choose to give my name to would even
have to know my name or anything about where I live. So it seems some of this
doesn’t necessarily have a rational basis in actual —

DR. GRANNIS: Agreed. I don’t think it’s either a National Patient Identifier
or matching algorithms; I think there is a strategy to move forward. If
somebody can come up with a powerful story about why an opt-in identifier
initially makes sense, you can gain momentum by saying, look, we’re protecting
it and it’s safe, we’re minimizing exposure and disclosure, and look at what we
can do with this new ability. The story needs to be told so that people really
understand it. I don’t think we’re ever going to get away from privacy
concerns, but some of the other uncertainty around it could be dispelled by
telling a clear story about the value of that identifier. It’s not an either/or
proposition.

MR. REYNOLDS: Stan, you okay with yours? Steve, you had a comment on that?

DR. STEINDEL: Yes, just to address one of the comments that Stan made about
feeling safer using his credit card. I think we’re all aware that another of
the ONC initiatives is developing what they’re calling breakthrough use cases;
we’ve heard about that repeatedly, and the American Health Information
Community met last week to discuss those breakthrough use cases. One that may
appear is some initial look at the personal health record. The thought at the
Community discussion was that the personal health record is too poorly defined
and too obtuse right now to take the whole thing on, but they wanted to include
some aspects of it to get it rolling. And one aspect that may be included in
the breakthrough use case is something they’ve loosely referred to as “my
registration information,” which is basically what you’re talking about:
identity and perhaps insurance information, the things you routinely give
whenever you receive medical care, presented in the form of some type of
locatable and transferable record. So sometime in the next year or two we may
be looking at test scenarios that actually have the credit card Stan is talking
about, the one we can swipe at the gas pump, for health care.

MR. REYNOLDS: Michael.

DR. FITZMAURICE: Before I ask questions I’ll note that at one gas station I
go to, I put my credit card in to get gas and it asked for my zip code. I put
in my work zip code, and it wouldn’t give me gas, wouldn’t recognize my credit
card. They must be checking it against the billing zip code, so I had to wait,
recycle, and put the card in again with my home zip code, and it gave me gas.

My first question, just a clarifying question; I’m not sure I got it right,
but I think I did. On slide six, where you talked about deterministic matching
and you have a 90 percent sensitivity, that is, a true linkage rate, I assume
part of that is bad data, as you explain in the next slide. Is it also true
that some of those didn’t have the complete data, or was this all bad data?
That is, the Social Security Number, name, and birth date were all there, they
were just incorrect —

DR. GRANNIS: Correct.

DR. FITZMAURICE: Versus some of them were missing. Did it include the
missing ones?

DR. GRANNIS: This particular dataset we created by matching on Social
Security Number alone so this was the blocking variable for this dataset —

DR. FITZMAURICE: They could have been missing birth date —

DR. GRANNIS: Yes, absolutely.

DR. FITZMAURICE: Just a clarifying question. The next one, you linked to the
Social Security Death Master File, were all of these patients that you looked
at dead?

DR. GRANNIS: All of these patients that we looked at had a Social Security
Number in the Death Master File.

DR. FITZMAURICE: All right, so you can be in the Death Master File without
being dead.

DR. GRANNIS: Generally no, but NTIS, which puts it out, puts disclaimers on
it —

DR. FITZMAURICE: So your initial selection was from patients who, you
thought, were no longer living.


DR. FITZMAURICE: That wasn’t clear to me, maybe I missed it.

I’m looking at the slide with all the boxes and the diagrams, the
algorithms. What came to my mind is that in the studies you had done, you said,
all right, if we use just Social Security Number alone, or just zip code alone.
Have you done a logistic regression where you look at the probability of a
match as a function of the Social Security Number matching, the date of birth
matching, the gender matching, to find out which ones give the strongest
marginal —

DR. GRANNIS: Yes. The probabilistic linkage algorithms actually generate a
match likelihood for each identifier, and it is essentially equivalent to
the weights you might see in a linear regression.
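
As a sketch of that equivalence: in the classic Fellegi-Sunter model of
probabilistic linkage, each identifier’s weight is derived from its m- and
u-probabilities, and a candidate pair’s score is the sum of per-field weights,
much like summing regression coefficients. The probabilities below are invented
for illustration; they are not Regenstrief’s actual parameters:

```python
import math

# Hypothetical parameters per identifier:
#   m = P(field agrees | records truly match)
#   u = P(field agrees | records do not match)
probs = {
    "ssn":           (0.95, 0.001),
    "last_name":     (0.90, 0.01),
    "date_of_birth": (0.85, 0.005),
    "gender":        (0.98, 0.5),
}


def field_weights(m: float, u: float) -> tuple:
    """Agreement and disagreement weights, Fellegi-Sunter style (log base 2)."""
    return math.log2(m / u), math.log2((1 - m) / (1 - u))


def score(agreement: dict) -> float:
    """Sum per-field weights for one candidate pair.

    `agreement` maps field name -> True (fields agree) or False (disagree).
    """
    total = 0.0
    for field, (m, u) in probs.items():
        agree_w, disagree_w = field_weights(m, u)
        total += agree_w if agreement[field] else disagree_w
    return total
```

With these illustrative numbers the Social Security Number carries by far the
largest agreement weight, since it almost never agrees by coincidence,
consistent with the answer that follows.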

DR. FITZMAURICE: Which ones have the highest weights? Which are the most —

DR. GRANNIS: Social Security Number.

MR. REYNOLDS: Steve, you had a comment?

DR. STEINDEL: Just a very quick comment, Mike: the Social Security Death
Master File does have some live people in it; my wife used to work with it.

DR. FITZMAURICE: And we know she’s alive right?

DR. STEINDEL: No, no, it wasn’t that she was in it; she worked with retirees
and used to search the file to see whether a patient had died, and she knew
that people were coming up in it who hadn’t.

DR. FITZMAURICE: Kind of an honorary society I guess.

Fourth question. In the example you gave, it’s blacked out, probably slide
26, 25, or 24, where you have patient records displayed by hospital: Clarian,
Clarian Community, Marion County. The same question, I think, that Steve asked:
when does the emergency department physician get to see that? Is that after the
physician has entered the data and sees there was a match of the patient within
your emergency department? Or does he get to see other hospitals in order to
look for a blood type or something else that may be in a —

DR. GRANNIS: They get to see this hospital list after they’ve selected the
patient from their hospital, a patient who has been seen at their hospital. So
you have to come in to my institution before I get to look for you at other —

DR. FITZMAURICE: I was concerned, as Steve was, that there’s information out
there you’re not getting to; you can only get to it if you identify a patient.

Last one, and also one that Steve alluded to. If you look at mistyped Social
Security Numbers, for example, and maybe dates of birth as well, as you add
those things up you could do a study, or at least express opinions, about the
value of a machine-readable card, whether it be a bar-coded card or a smart
card, in reducing or eliminating those typing mismatches. That could add some
value for people who are pushing for national bar coding or a national patient
identification card. So if not something fully quantitative, you would at least
have something to say about the upper bound of the value saved by avoiding
those typing errors.

Last point, and also what I think Steve was referring to; I always wish I
could go before Steve sometimes —

MR. REYNOLDS: Michael, for the next speaker I will assure that happens. The
chair does have that liberty.

DR. FITZMAURICE: A voluntary national patient ID: I was on a Markle
Foundation linking committee where we looked at it and decided that politically
it wasn’t acceptable to enforce a mandatory patient ID, but it would be
feasible to have a service where I could register myself, and my family could
register themselves, and receive a national patient ID that could be linked to
a national master patient index. Is that feasible from your point of view? It
doesn’t have to be associated with a card or anything, though obviously a card
would reduce those typing errors. But having something voluntary would be an
alternative to giving out all my information, as Stan had to do to get his gas:
name, date of birth, just the information we are trying to protect. Do you see
that as a feasible alternative to linking up all of these regional master
patient indexes with national master patient indexes, which means all my four
variables go across the country?

DR. GRANNIS: Creating that list I think is technically feasible; the devil’s
in the details of implementation. How do you distribute it and ensure that the
information is available? Is it by card, and do vendors and regions buy into
your initiative to create this? So I think we need to be careful about having
15 National Patient Identifier initiatives —


DR. GRANNIS: Absolutely technically feasible, yes.

DR. FITZMAURICE: Thank you. Oh, and by the way, it was a great presentation;
it put a lot of perspectives on the problem, and you got down to the nuts and
bolts of the data, not just the algorithms. Thank you.

MR. REYNOLDS: Okay, Karen, I think you’re — oh, you pass. Judy — I’m going
to take a moment now since we went around once. Just quickly, one of the things
that has struck me, and a lot of us on this committee are on multiple
committees within this process, is the whole idea of linking this to the EHR
vendors and to everything else that’s going on out there. To me, the thing that
needs to happen as quickly as possible is this ability to come up with the base
search criteria, so that as people are developing systems and other things they
at least have those as identifiable indexes when people start accessing
records. If we can get some kind of a standard view of what those would be,
even, as Michael put it, if you took your base ones but then had others that
would make it more sensitive, to use your term. Because otherwise everybody is
running together right now; the problem is we’re not necessarily running in the
same direction and we’re not necessarily linking up as a group. Does that make
sense?

DR. GRANNIS: Yes, I agree with establishing some fundamental understanding
about what elements we’re going to be using in the algorithm.


DR. WARREN: I actually wanted to follow up on a question that Stan raised in
my mind, thinking clinically about what we’re doing with matching patient data,
because his concern was that a patient arrives in the ER and you don’t have
access to his information, so you make choices based on a lack of data. What I
was wondering is, have you looked at the reverse of that? A patient shows up in
your ER and you inappropriately match them, so you now put the current data you
collect on them into someone else’s record.

DR. GRANNIS: We haven’t; to our knowledge that happens so infrequently that
there’s no way to study it formally. We do routinely review patient matches and
look through for suspect cases, shorter names, known challenging matches, to
make sure that we’re not doing that.

DR. WARREN: So it’s just a review on your part on how you’re doing that.

DR. GRANNIS: Right. And we tell folks we’re going to do as good a job as we
can at matching, and it’s up to you to make sure you make the decisions that
are best for the patient in that context. In medical school they told me, treat
the patient, not the monitor, so there’s that implication there as well.

MR. REYNOLDS: Okay, Jeff and this will be our last question for right now.

MR. BLAIR: Shaun, when will your report that covers Boston, Regenstrief,
Indiana, and Mendocino County be available?

DR. GRANNIS: Well, the record locator service is slated to run for a year,
and the report will come sometime after that; I don’t want to put hard numbers
on it because I’m not sure what we’re going to be seeing. Before that, I think
you’re likely to see, within the Indiana Network for Patient Care, an analysis
comparing some algorithms within our dataset, prior to seeing some —

MR. BLAIR: So what you’re doing is comparing the algorithms; you don’t
really have populations traveling among the three sites, where you could wind
up seeing whether you’re able to match a patient from one geographic area
within another.

DR. GRANNIS: Ultimately that is the goal. And again, we’ve done some studies
that show about five percent of visits to our emergency department are from
outside the state; in fact, if you look at a three-year series of emergency
department visits, the entire United States lights up when you map the home
addresses of patients seen in Indianapolis. So while the numbers are small, in
aggregate across the country the overlap is very large. To begin building that
ability to match up patient data, there’s going to be a lot of simulated
patient data, where we’ll load data from one system into another and perform
some permutations on that data to look at the —

MR. BLAIR: You said the report is going to be ready at the end of the year,
you mean the end of this year or the end of next year?

DR. GRANNIS: The ONC project — I’m sorry, I’m confusing a couple of things
here; there’s too much going on right now. The Markle project for the record
locator service is slated to have some software available in January of ’06 for
folks to exchange messages. That doesn’t include the actual matching algorithm;
rather, it includes software to create a locator service that can aggregate the
messages returned from the various organizations, but the actual matching
algorithm is not in there. So there may be some opportunity in the ONC project
that Regenstrief is involved with, again with Massachusetts and Care Sciences,
to work on a reference implementation for regional data —

MR. BLAIR: Okay, and the comparison of the algorithms, when will that be —

DR. GRANNIS: Well, I’ll say what I’m going to say and I fear sounding
biased, but as a medical informatics researcher I am an academic and I do seek
grant funding. In July of this year I’m putting in a career award application
for record linkage, so give me five years and hopefully you’ll have some
answers.

MR. REYNOLDS: Okay, Shaun, thank you, very well put together, excellent
input for us, we thank you very much.

All right, we’ll start again at 10:55; given the length of the discussion
this time, I’m sure we’ll have a similar one with our next speaker. So 10:55,
15 minutes. And please turn off your microphones so there’s not a lot of
background noise for people on the internet.

[Brief break.]

MR. REYNOLDS: Okay, Judy, do you want to go ahead and introduce Jerry?

DR. WARREN: One of the other perspectives that we wanted to hear in this
testimony was the ways that groups other than health care facilities are
actually identifying patients, and one of the areas we were looking at is what
insurance companies are doing to link their subscribers with their data. So we
have invited Jerry Bradshaw from Blue Cross/Blue Shield of Arkansas to share
some of the work that they’ve done there. And I have to say that, in taking a
look at how to do this, I keep hearing more and more references to the work
that Arkansas is doing in this area, so I commend you for that. With that, we
will turn it over, to give you the most amount of time for your testimony. Mr.
Bradshaw?

MR. REYNOLDS: Do we have copies of —

DR. WARREN: Yes, in your black folder there are copies of his slides.

Agenda Item: Matching Patients to Their Records – Mr. Bradshaw

MR. BRADSHAW: Well, good morning. My name is Jerry Bradshaw; I am executive
director of health information networks for Arkansas Blue Cross/Blue Shield.
I’m speaking to you today on behalf of the Blue Cross/Blue Shield Association,
which is made up of 38 independent, locally operated Blue Cross companies that
collectively insure 93 million people, or about one in every three Americans.

On behalf of the Blue Cross and Blue Shield Association, I want to thank you
for the opportunity to share our thoughts concerning patient identification and
matching. I was joking with Michael Decarlo earlier that the previous presenter
and I should have tag-teamed, because we actually came to a lot of the same
conclusions, so maybe I’ll just say what he said and we could all save some
time. But we did take a few different directions on some things, which might be
interesting to you.

As I’m sure you’re painfully aware, there are at least two schools of
thought regarding a National Patient Identifier. Some folks believe that a
National Health Information Network is simply not possible without some form of
uniform identifier, while others have concerns that such an identifier would
compromise patient privacy. While the privacy risk may be more perceived than
real, experience has taught us that developing such an identification system,
as with the national provider identifier for example, is fairly costly and
takes a very long time to implement. Accordingly, it makes sense to look at the
alternatives, at least in the short term.

My testimony today is based on a little over six years of experience using a
universal identifier and will touch on the following points. First, to provide
a frame of reference about the system this information was drawn from, I will
give you some background on the Advanced Health Information Network, which I
will refer to as AHIN. To give you an idea of how the system works, we’ll look
a little at the AHIN system architecture, then at how we actually do patient
matching and identification on AHIN, some of our experience with algorithms and
identifiers, and finally a summary and conclusions.

In 1995 an Arkansas based consortium undertook the creation of one of the
nation’s first interoperable networks to include information in the clinical
arena, the administrative arena and the financial arena. The vision of this
network was basically to empower health care professionals with information at
the point of service. The Advanced Health Information Network was built by a
partnership of leading Arkansas health care providers, health care systems,
Arkansas Blue Cross/Blue Shield, and the IBM Corporation.

The system was built according to several architectural guiding principles.
The ultimate purpose of building this network was to benefit patients on the
clinical side and members on the administrative and financial side.
Accordingly, the patient and/or member was placed at the epicenter of the
architecture, since all of the transactions revolve around him or her. We
attempted to think globally about the health care industry, not
organizationally; we were looking for solutions to problems that were universal
to the health care system, as opposed to problems that exist only in the
specific organizations with whom we were partnering.

Thirdly, every organization we partnered with had big-dollar investments in
existing computer systems, so we were looking to leverage those existing
systems rather than build something to replace them; essentially we were
attempting to build an integration engine to pull all of this together. We
wanted to provide alternative options for integration where possible, to avoid
building solutions that were geared to work with only one particular patient
record system in the case of hospitals, or one EMR in the case of physicians’
offices. We wanted to go where no man has gone before, to think outside the box
and not be bound by what had been done in the past. And we wanted to create a
virtual, secure view of a patient’s longitudinal record with the use of a
master patient index; we’ll talk about that in more detail in a little bit.

We wanted to build on industry standards, primarily ANSI on the
administrative/financial side and HL7 on the clinical side, and we wanted to
create open systems which weren’t tied to any one tool or vendor. And finally
while we designed this for the State of Arkansas we attempted to architect it
for portability to anywhere.

Just as a matter of note, in addition to creating AHIN, Arkansas Blue
Cross/Blue Shield has been working with physicians since 1996 to implement
electronic health records, in order to get systems in place that work in an
interoperable way with AHIN. To date we have implemented over 1,000 electronic
health record licenses in the State of Arkansas, primarily Logician, which is
presently the GE Centricity system. These implementations have been in larger
clinics, by and large.

Recently however, as a matter of fact in the last six months, we’ve
completed a successful pilot with a wireless EHR system that is at a price
point that is consistent with the NHIN strategic vision.

The AHIN is built on a distributed architecture model; some people call this
the federated Google model. The foundation of the whole system is a master
patient index. The master patient index, which is represented here, contains
information about a number of different types of data. The thing to bear in
mind is that it’s not the data itself; it’s a description of the data that
includes pointers to where the data actually resides. It also contains a
universal person identifier. I will refer to this as a universal patient
identifier sometimes and a universal person identifier at other times; it’s the
same thing, I just bounce back and forth. It’s a UPI.

It may be instructive to think of the MPI as a great big spreadsheet in
Lotus or Excel: it has a dozen or so columns and millions of rows. Each of
those rows is information about a patient encounter, and one of the dozen or so
columns is the UPI, which is what we use to link all the data about that
specific patient together.
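
The spreadsheet analogy can be made concrete with a toy table. The column
names and pointer strings below are illustrative inventions, not AHIN’s actual
schema; the point is that each row is an encounter, and the UPI column links
rows belonging to the same person:

```python
# Toy master patient index: each row describes one encounter and holds a
# pointer to where the data lives, never the clinical data itself.
mpi_rows = [
    {"upi": "UPI-001", "source": "Hospital A", "record_type": "lab",
     "pointer": "hospA-server/records/8812"},
    {"upi": "UPI-001", "source": "Clinic B", "record_type": "visit",
     "pointer": "clinicB-ehr/charts/443"},
    {"upi": "UPI-002", "source": "Hospital A", "record_type": "visit",
     "pointer": "hospA-server/records/9020"},
]


def rows_for_patient(upi: str) -> list:
    """Return every MPI row (and thus every pointer) linked by this UPI."""
    return [row for row in mpi_rows if row["upi"] == upi]
```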

Every technical presentation has to have at least one slide that is
marginally decipherable, and this is mine. This is a view of the AHIN from the
100,000-foot level. At the center of the system is a hub; major servers, for
example at hospitals, are connected to this hub by way of dedicated high-speed
lines. The hub is not where the data resides, but it is where the MPI resides,
so a query from a physician’s system goes to the hub. Actually, if you peel
this down a little further, physicians can also connect to their hospital
systems and get onto that high-speed line to reach the hub. Nevertheless, if a
physician makes a query against the system, he’ll hit the hub one way or the
other.
This is an actual search screen, a screen capture off the system. The search
criteria are identified there, and they are the fairly standard identifiers,
including Social Security Number, last name, first name, date of birth and so
forth. There are also fuzzy searches, which are essentially sounds-like
searches. The only criterion that is really required is either the Social
Security Number or the last name. Now, you really wouldn’t want to search for
Smith alone, and very few people do, because you’d get back 100,000 hits;
usually there’s a first name or at least a date of birth in the search.

But once you invoke the search, you get back everybody who meets the
criteria you entered. Naturally, the more criteria you enter into the search,
the fewer names you get back. Interestingly, if you put my Social Security
Number in there, you get only one hit. Based on the additional information
presented, the user then selects one of those individuals, and when the
individual is selected the user is taken to a screen with some tabbed options.
The tabbed options represent types of data. In the background the system now
knows what this UPI is: there’s the UPI. The system knows what that UPI is, and
it goes through the MPI and figures out every place where data exists for that
individual. If the user wants to see the clinical chart, and he has the
privileges to do so, he clicks on the clinical chart; the system goes out to
the various servers that the data is sitting on, pulls it into a virtual
record, and presents it to the user.
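
The query flow just described (UPI in, MPI pointers out, fragments fetched
from their home servers and assembled into a virtual chart) can be sketched as
follows. The pointer strings, the sample data, and the `fetch` stub are
hypothetical stand-ins for AHIN’s actual servers and network calls:

```python
# MPI: maps a UPI to pointers, not to the data itself.
mpi = {
    "UPI-001": ["hospA/labs/8812", "clinicB/visits/443"],
}

# Simulated data sitting on the distributed source systems.
remote_servers = {
    "hospA/labs/8812": {"type": "lab", "result": "HbA1c 6.1%"},
    "clinicB/visits/443": {"type": "visit", "note": "annual exam"},
}


def fetch(pointer: str) -> dict:
    """Stand-in for retrieving one record fragment from its home server."""
    return remote_servers[pointer]


def virtual_record(upi: str) -> list:
    """Assemble the patient's virtual longitudinal chart on demand:
    look up the UPI's pointers in the MPI, then fetch each fragment."""
    return [fetch(p) for p in mpi.get(upi, [])]
```

Nothing is copied centrally; the virtual record exists only for the duration of
the query, which matches the hub-and-pointer design described above.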

The discussion of architecture is probably not complete without talking a
little about security; I won’t spend a lot of time on this. All the major
servers are connected together into a private high-speed network, and they
authenticate each other through a certificate authority so that they know who
they’re talking to. For the user, we initially used exclusively a VPN using the
IPSec tunneling protocol, which is the most secure form of IPSec. What we found
was that that was highly maintenance-intensive, and we have since migrated to
Secure Sockets Layer, because there is essentially no maintenance on SSL; it’s
all in the browser.

When a user logs on to the system, the user is associated with a specific
organization based on their user ID and password. Access to data is role-based,
the roles are assigned using the need-to-know principle, and access to all
confidential data except eligibility is limited to that organization’s
patients. In other words, one clinic can’t see another clinic’s claims, nor can
one physician see another physician’s medical records, except in two
situations; there are two ways that that can occur. There’s actually a third,
but I won’t talk about it because it’s never used.

We gave physicians the ability to certify patient authorization for access
to clinical data on new patients. What we actually did is, if a physician tried
to access a patient for whom they had no records in the system, they got a
pop-up that said, you can’t do this, but if you’ll get the patient to sign this
patient authorization and certify that you have the patient’s approval to view
this record, then we’ll give it to you.

The second occasion where access is allowed, where the physician is not a
part of that record, is in emergency situations. We gave emergency providers a
kind of break-the-glass capability to go in and view all clinical data,
regardless of their past association with that patient.

Neither of these exceptions, by the way, applies to the administrative side;
they apply only to the clinical side. In other words, if you break the glass,
you don’t get to see some other doc’s claims; that’s not part of the deal.
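
The access rules just described (own patients only, with a
certified-authorization exception and a break-the-glass exception on the
clinical side, and no exceptions at all on the claims side) reduce to a pair of
small predicates. This is a sketch of the stated policy, with assumed names,
not AHIN’s actual implementation:

```python
def may_view_clinical(user_org: str, patient_orgs: set,
                      signed_authorization: bool = False,
                      break_the_glass: bool = False) -> bool:
    """Clinical data: your own patients, OR a certified patient
    authorization on file, OR an emergency break-the-glass override."""
    return (user_org in patient_orgs
            or signed_authorization
            or break_the_glass)


def may_view_claims(user_org: str, patient_orgs: set) -> bool:
    """Administrative/financial data: your own patients only.
    Neither exception applies on this side of the system."""
    return user_org in patient_orgs
```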

Finally, registration documents in participating organizations were changed
to an opt-out; essentially we changed the registration documents to say, sign
here if you don’t want your information on the system. Also, because of various
mandates, we found that we could not openly share clinical data regarding some
conditions, examples being mental health and sexually transmitted diseases.

Current status of the system: the system moved from concept to beta in 1998
and remained in operation for two years in two regions of the state. In 2001,
while the system worked very effectively, the provider side decided that they
didn’t have the funds to support the system anymore and we shut the clinical
system down. The administrative and financial part of the system still operates
today; it works basically just like the clinical system in terms of patient
identification and matching, and at this point it is in virtually every
physician’s office and hospital in the State of Arkansas. As you can see,
Arkansas has about 9,000 to 9,500 physicians, so we’re in nearly all of their
offices.

Getting into specifically how the system does patient matching: patient
identification and matching is really an integral part of what we do. If you
look at the process from a data-flow perspective, when a patient encounter
occurs, the result of the medical encounter is stored on a local server. That
server may be at a hospital, where it is integrated with their clinical patient
record, or it may be a physician’s office EHR; it really doesn’t matter. Once
the server gets the new record, it sends a copy of the general information
about that encounter to the hub server. The hub server then extracts the
identifying information from the record, which is sent in the HL7 messaging
standard, and executes a matching algorithm. The algorithm presented on the
slide is not actually the algorithm we use; this one is a lot simpler and
easier to explain, and I thought I would use it for exactly that reason.

Before we get to what happens in actual practice, let me give you a couple
of definitions of terms. I already mentioned the UPI; that is a number
generated by the system, a patient identifier that links all of a particular
individual’s data together. The next three or four identifiers are
self-explanatory. The external ID can be a number of different things: it can
be a medical record number from a hospital, a patient ID from a physician’s
office, or a member number from an insurance company. It really doesn’t matter,
as long as that source always sends that type of data with that type of record.

Then fuzzy match: in some cases, like first names, it’s a sounds-like match;
for dates of birth it takes into account transpositions, for example 1/1/2040
as opposed to 1/1/2004. In other cases we have also used a window of date of
birth plus or minus six months. Those are the general definitions. In actual
practice, the system takes the identifying data off the incoming transaction
and compares that data to each one of these identifiers. If a match is made on
a specific identifier, the score you see to the right is added to the matching
score; if a match is not made, the number in the non-match column is subtracted
from the matching score. Once every identifier has been evaluated, if 100
points have been scored, the system declares a match, adds the UPI to the
record, and stores the record in the MPI. If 100 points are not reached, the
system declares a non-match, assigns a new UPI to that record, and stores it in
the MPI.
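
The simplified scoring scheme described above can be sketched in code. The
identifiers, point values, and field names below are invented for illustration
(the slide’s actual numbers are not reproduced in the testimony), and the fuzzy
date check covers only the transposition case Mr. Bradshaw mentions:

```python
THRESHOLD = 100  # points required to declare a match

# (field, points added on match, points subtracted on non-match).
# All values are hypothetical, not AHIN's real table.
RULES = [
    ("external_id", 80, 10),
    ("ssn",         70, 10),
    ("last_name",   20, 10),
    ("first_name",  10,  5),
    ("dob",         20, 10),
]


def dob_fuzzy_match(a: str, b: str) -> bool:
    """Exact date match, or a digit transposition such as
    1/1/2040 versus 1/1/2004 (same digits, different order)."""
    return a == b or sorted(a.replace("/", "")) == sorted(b.replace("/", ""))


def match_score(incoming: dict, candidate: dict) -> int:
    """Add points on each matching identifier, subtract on each mismatch;
    identifiers missing from either record neither add nor subtract."""
    score = 0
    for field, add, subtract in RULES:
        left, right = incoming.get(field), candidate.get(field)
        if left is None or right is None:
            continue
        matched = dob_fuzzy_match(left, right) if field == "dob" else left == right
        score += add if matched else -subtract
    return score


def link(incoming: dict, candidate: dict, next_upi: str) -> str:
    """At or above the threshold, reuse the candidate's UPI (match);
    otherwise assign a fresh UPI (non-match)."""
    if match_score(incoming, candidate) >= THRESHOLD:
        return candidate["upi"]
    return next_upi
```

Note how a single static identifier (SSN or external ID) contributes most of
the points, which is exactly the dependence on static identifiers that the
testimony goes on to quantify.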

We have found that a static ID really is indispensable in identifying
patients. Unfortunately identifying information changes: last names change
because of marriage and divorce, first names fall victim to nicknames or
shortening like Jan for Janice or Tom for Thomas, all kinds of things can
happen to data like that. To illustrate, this is a fictitious individual named
Linda Jones and Ms. Jones is widowed, she has lived in Mawmaw(?), Arkansas for
ten years and she’s got a lot of data on the system. Ms. Jones gets remarried
and she moves to her new husband’s home. After a year she sees a new physician
and all of a sudden we’ve got another record here and we’ve got to match that
record.

What we find however is the only thing that we’ve got left out of these
common identifiers is a first name, a sex, and her date of birth. Now
interestingly although this is a fictitious individual and I promise you I
pulled it straight out of the air, I pulled the name and the date of birth out
of the air, I went to the AHIN database to figure out what kind of problem I
would have in matching this individual. The AHIN database contains about 1.5
million individuals. Of the 1.5 million individuals 12,000 of them have a first
name of Linda. Of that 12,000, seven have a date of birth of 10/19/1948. The
bottom line is there’s no way you can match that based on these identifiers;
you’ve got to have something in addition. In AHIN’s case we use the Social
Security Number, and we also have that external ID that proves very useful at
times.

Another factor, well one more comment about that, what we found is that
absent some kind of static identifier the probabilistic algorithm is much less
likely to produce a match. In fact if you take the static identifier, including
the Social Security Number that we use and the external ID that we use, our
match rate on records is 99.5 percent. Absent either one of those the match
rate falls to below 90 percent, so it’s a big difference.

Another factor to consider in whether some kind of static identifier is
needed or not is response time. I will never forget when we beta’d the system,
we put it in one of the beta sites was a hospital emergency room, and we put
the system in, left it in for about six weeks and then went back and sat down
with physicians to find out what we had done wrong, what we had done right,
what they’d like to see us do differently. One of the first comments was it’s
too slow and of course my response was well, how long a response time are you
you seeing. And the physician’s response was well it takes 30 to 45 seconds to
get a record back. And I asked him, I said okay, how do you normally get
records, well I tell a nurse, the nurse calls the medical records department,
the medical records department searches for it, and 30 minutes later he gets
the record. The problem is sitting in front of a screen for 30 seconds is not
acceptable and so response time is all relative but that’s a real life story of
something that we ran into when we beta’d the system.

After doing a lot of research on the front end of this system about response
time, we found that using a UPI as a key locator, as opposed to searching the
database with individual identifiers like name, was much, much faster.
Response time is a factor that you need to think about.

In summary I’ve offered the following conclusions, the first one being that
matching algorithms are really very good but they’re vulnerable and it really
kind of boils down to what is acceptable in terms of a false negative rate.

Secondly, a static ID would indeed be very useful in many cases especially
those where data has changed.

Next, a National Patient Identifier provides advantages but is a long term
solution at best because of implementation issues, it’s going to take ten years
to put it in and I don’t believe that anybody wants to wait ten years to start
a National Health Information Network.

I would say that at this point the Blue Cross Association has not suggested
that CMS take up the issue of a National Patient Identifier, that really is
kind of at this point in time in limbo.

But finally a probabilistic algorithm for patient matching and
identification is most likely the most viable solution given the urgency
associated with NHIN.

That concludes my prepared remarks and I’d be happy to respond to any
questions you may have.

MR. REYNOLDS: Okay, Jerry, thank you very much. Michael, I am going to give
you the ultimate opportunity. Or better yet I will start with someone else and
as soon as you put your hand up you will be next.

DR. FITZMAURICE: I’m very happy to go. First of all, thank you Jerry for
great testimony on what’s happening in Arkansas and how you’re pulling this
together, it’s good to hear a lot of the commonalities across people who are
doing the same thing which is a very serious job of matching patients and
matching patient information.

One little picky point on slide number four, you say built upon industry
standards ANSI and HL7; ANSI is not a standards developing organization, it
accredits standards developing organizations —



MR. BRADSHAW: I just didn’t put the X in —

DR. FITZMAURICE: It’s like calling St. Mary’s Hospital JCAHO St. Mary’s.

Have you done any studying of false positives and false negatives, that is
the probabilities of each and depending upon how you manipulate your algorithm
or the variables you include?

MR. BRADSHAW: Yes. I can’t quote you statistics, what I can tell you is we
began with a base algorithm and then began our tuning process and the tuning
process was to minimize false positives. Of course when you minimize false
positives you are likely to increase false negatives and so there’s a balance
that has to be struck and, call it trial and error, we went through the
process of tuning the algorithm until we hit a happy medium.

DR. FITZMAURICE: So in your quoting of 99.5 percent matching does that mean
that you’re willing to take a .005, or five out of 1,000, chance of matching
people incorrectly?

MR. BRADSHAW: The errors by and large are false negatives, it’s where we
create a duplicate record when really a match exists on the database. False
negatives are bad enough but false positives are far worse.

DR. FITZMAURICE: Yes. That’s one thing I learned, well if I thought about it
I would have learned it but that’s one thing I learned today and glad to see it
verified on both sides.

On the example for Linda, Linda Jones, where there were seven other Lindas
that came up that had the same birth date, you didn’t use Social Security
Number in there —


DR. FITZMAURICE: But for most patients that’s one of your key variables.

MR. BRADSHAW: In our algorithm we use Social Security, the reason I did it
the way I did is because there are a lot of folks that don’t advocate using
Social Security at all and those are fairly typical identifiers for matching
algorithms. If you add Social Security in there you’re right, you get a
direct —

DR. FITZMAURICE: When you look at the variables in your matching algorithms
is Social Security Number still the most powerful number at the margin being
able to identify a patient?

MR. BRADSHAW: Our UPI probably is a little better than Social Security Number
but they’re very close to each other.

DR. FITZMAURICE: Because UPI incorporates information from the other.

MR. BRADSHAW: Yes it does.

DR. FITZMAURICE: All right, thank you very much. Thank you, Harry.

MR. REYNOLDS: Stan, did you have anything?

DR. HUFF: This is a little tangential but what the heck. You talked about
the opportunity for people to opt in or opt out of this network and the
sharing of information there. A different question, which is sort of
interesting: if a person at a given facility wanted not to connect their
records even within the facility, would business work?

MR. BRADSHAW: Yes, but not as well, you wouldn’t have a complete record is
what it would amount to.

DR. HUFF: I mean it’s been proposed that some people essentially in getting
a second opinion don’t want the second physician to have access to their
existing information and I’m trying to think whether that would work, you know
in my own system I’m struggling to see how we could provide care and provide
that opportunity for them to opt out of connecting records even within a given
enterprise. Have any thoughts about that?

MR. BRADSHAW: Well of course our opt out was on an organization by
organization basis and you are very correct in stating that an individual for
this particular clinic could say no, I don’t want that record to be a part, and
essentially we don’t have a solution on how to deal with that. It would go
back to the patient being the epicenter of this thing; if the patient doesn’t
want the record shared then that should probably be their right.

DR. HUFF: Thank you.


DR. STEINDEL: Thank you, it was a very interesting talk. I loved the way you
gave your scoring algorithm and I liked the balance of plus and minus, I
thought that was a very interesting approach and I appreciate you sharing it.

The question that I have is in Shaun’s talk on the Regenstrief Institute
they’re obviously using something that’s essentially equivalent to your UPI
only they’re doing it internally and you’re doing it externally. Do you have
any comments to make on the utility of it being externally available versus
internally available? It obviously always has to exist internally.

MR. BRADSHAW: Yes. Well, frankly, one of the reasons that we built the UPI in
the first place, it wasn’t the primary reason, was that back in 1995 we
anticipated that sometime in the future there probably would be a National
Patient Identifier and we would convert to use it as the UPI. So that’s really
one of the reasons we put it there in the first place.

As far as it being externally available, it is pretty much a lock: when we get
a UPI, it’s a match, unless somebody fat fingers it. The comment about the
check digit, that is one thing that we did not do; we should have done that.
If we’re going to do a National Patient Identifier it needs to have a check
digit, that’s just a given.

DR. STEINDEL: Just to extend that thought a little bit, when we were talking
about these regional systems like yours that are developing around the UPI and
like the Regenstrief which is developing around a hidden equivalent, do you
foresee any type of nationwide architecture that might contain a master patient
index of these UPI equivalents?

MR. BRADSHAW: The way I have always thought about that, and we’ve not built
it like this but in considering, if I were going to build a nationwide system
how would I do that, and the thing that comes immediately to mind is Napster.
Essentially that is the model that most folks are thinking about I would think
in how you make this thing work. Now you may have 40 different RHIOs and all of
them may have a UPI that’s all their own but you could query that, they could
use their own algorithm to find that individual and give you back that
information just like you were searching for a song on Napster. And that’s kind
of my thoughts about how it could be done.
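
The peer-to-peer model Mr. Bradshaw sketches, each RHIO keeping its own private UPIs and answering queries with its own matching algorithm, might look something like the following. The `RHIO` class and its naive exact name-and-DOB rule are invented stand-ins for illustration, not any actual RHIO software:

```python
from dataclasses import dataclass, field

@dataclass
class RHIO:
    """One regional network with its own private UPI space (hypothetical)."""
    name: str
    index: dict = field(default_factory=dict)  # local UPI -> demographics

    def lookup(self, query: dict) -> list[tuple[str, str]]:
        """Each RHIO applies its own matching algorithm; a naive exact
        name-and-DOB rule stands in for it here."""
        return [(self.name, upi) for upi, rec in self.index.items()
                if rec["name"] == query["name"] and rec["dob"] == query["dob"]]

def nationwide_search(query: dict, rhios: list[RHIO]) -> list[tuple[str, str]]:
    """Fan the query out to every RHIO and pool the (RHIO, local UPI) hits,
    much as a Napster search asked every peer for a song."""
    hits: list[tuple[str, str]] = []
    for rhio in rhios:
        hits.extend(rhio.lookup(query))
    return hits
```

The point of the design is that no nationwide master index is required: each region remains the authority for its own identifiers and simply answers queries.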

DR. STEINDEL: Thank you.

MR. REYNOLDS: Jeffrey.

MR. BLAIR: Thank you, Jerry. Did you say you have a million and a half
individuals in the database —

MR. BRADSHAW: Yes, sir.

MR. BLAIR: Okay, and it’s pretty much throughout the State of Arkansas —

MR. BRADSHAW: That’s correct, some into Northern Texas, actually one of the
facilities that we participated with is about 200 yards on the inside of Texas.

MR. BLAIR: Okay. I have two questions. One is I’m assuming that a part of
the design that you put in place was so that if you have a patient in Little
Rock who lives in Little Rock and they travel someplace else in Arkansas and
they need medical care that you should be able to find them just as readily as
in Little Rock. So you probably had some experience, has that turned out to be
the case and if it hasn’t turned out to be exactly the case what kind of
impediments or problems have you wound up with when a patient goes to another
geographic location?

MR. BRADSHAW: We essentially found that it worked amazingly well. As long as
we got good data on the incoming query, good names, first name, last name,
Social Security Number, invariably we could make the match.

MR. BLAIR: And the key piece, the thing that is the, that’s most critical to
the match apparently is the Social Security Number being available?

MR. BRADSHAW: That is a key static field, yes.

MR. BLAIR: The other question that I had isn’t directly related to the
patient record but it’s to the broader picture of what you were trying to
achieve, you mentioned that you were able to get a lot of health care providers
to lock in and implement their electronic health record systems using the same
vendor and I would have assumed that if a state or region or the nation had a
single vendor and were able to share health care information that that would
have been great value and yet you indicated that the funding after a couple of
years to support the exchange of clinical information from electronic health
record systems is no longer in place and therefore that capability has been
discontinued. Is that correct?

MR. BRADSHAW: It’s partially correct, we still operate the interfaces from
hospitals to the electronic medical record so that we can populate lab results
and dictation and so forth to the physician’s EHR, but in terms of the clinical
data repository that is no longer in operation.

MR. BLAIR: Okay. On a nationwide basis since you’ve already tried to do what
other regions and the nation would like to be able to do, and yet for some
reason the value wasn’t perceived as so compelling that the funding continued,
what advice or guidance do you have to the rest of us so that when we do try to
set up these networks to share financial, administrative and clinical
information from patient records that we maintain the support of the population
and the health care providers and the payers to continue such a system?

MR. BRADSHAW: I think first of all, one of the problems that we encountered,
one of the situations that we fell victim to, is that we didn’t have enough
partners; we had just two health systems as partners, not counting IBM. The
problem is that the investment necessary to build a system like this and to
keep it running is pretty darn
high but you can run a lot of traffic across the fixed asset once you get it
built. If we had had 20 health systems instead of two the answer probably would
have been different. But one of the things to remember, this was back in the
middle ‘90s and there weren’t a lot of people thinking about this back then,
and it was especially difficult then to get somebody to commit a million or
two dollars to a venture like this. Today I
think it’s a little bit different story and I think that the key to it is
getting a lot of people involved so that you can spread that cost.

MR. BLAIR: Thank you very much.

MR. REYNOLDS: While we continue asking questions, Shaun, would you mind
stepping to the table? Since we heard two distinct conversations, if you have
a subsequent comment on anything that comes up I think that’d be helpful.

The question I had quickly, Jerry, you mentioned on slide six, under health
administration, global registration. So is it set up so that there is
literally global, the same, registration in every institution? You list a
number of things under the hub.

MR. BRADSHAW: No. Essentially what that feature allowed us to do was if a
hospital wanted to use that instead of their ADT system they could actually
register the patient in AHIN and drive that information into their ADT system.
And the value of doing that is we had the administrative data to populate
their ADT record, we had health insurance information for multiple payers,
because we built this as an all payer system, by the way this was not solely a
Blue Cross system, and so we could drive that into there and it actually was
faster than using their own system. And the data was accurate; when they filed
a claim they filed it with accurate data.

MR. REYNOLDS: The second thing, I noticed on slide nine you showed your UPI,
big number, that’s a really big number.

MR. BRADSHAW: That’s one of the reasons that we need a check digit on that
number because if there is a National Patient Identifier it’s going to have to
be a big number and it’d be prone to fat fingering. By the way, for those of
you who haven’t counted, it’s actually 24 digits: the first four numbers
identify what location the data came from, the second four digits indicate
what server it came from, and then 16 digits is the UPI.
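
As a rough sketch of what that layout and the missing check digit could look like, the following splits a 24-digit UPI into the segments Mr. Bradshaw describes and appends a check digit. Luhn is used purely as a familiar example of a check-digit scheme; the testimony names no particular algorithm:

```python
def split_upi(upi24: str) -> dict:
    """Break a 24-digit UPI into the segments described: 4-digit location,
    4-digit server, 16-digit identifier."""
    assert len(upi24) == 24 and upi24.isdigit()
    return {"location": upi24[:4], "server": upi24[4:8], "upi": upi24[8:]}

def luhn_check_digit(payload: str) -> str:
    """Compute a Luhn check digit so a fat-fingered entry is caught."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:      # double every second digit, starting at the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def is_valid(number: str) -> bool:
    """Verify a number whose final digit is its Luhn check digit."""
    return luhn_check_digit(number[:-1]) == number[-1]
```

Note that Dr. Grannis’s caution below applies to this layout: the location and server prefixes are embedded meaning, which he argues an identifier should not carry.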

DR. GRANNIS: Just a quick comment on that. I think identifiers need check
digits, and an identifier should not have any additional meaning in it,
because what happens is people start wanting to use it for the meaning that’s
inherent in it besides the uniqueness, so I would just —

MR. REYNOLDS: That was part of the reason for the question, because anytime
you start embedding knowledge in these systems, or these numbers, it starts to
lose itself as it spreads out.

MR. BRADSHAW: Technically you could drop the first eight digits and nobody
would ever know.


DR. WARREN: I had a couple questions for you, Jerry, I noticed that you
started out very small and then you’re almost all of Arkansas now, and I was
wondering what role did having this unique patient identifier play in making
it possible to have all of Arkansas in this health information network?

MR. BRADSHAW: I never really thought about it in those terms.

DR. WARREN: I mean like was it a critical thing that you could actually
identify these patients?

MR. BRADSHAW: It was critical to the identification of patients and if we
couldn’t identify the patients nobody is going to use the system so I guess
you’d have to say it was really, really important, but in sort of an indirect
way. The reason it became so popular is because it is very, very feature
rich and at this point most physicians in the State of Arkansas, at least their
staffs, their business office staffs, wouldn’t dream of doing business without
it.

DR. WARREN: Then the other thing I wanted to ask, and this kind of goes back
to not so much around the UPI but more to e-prescribing, you said that you were
giving out licenses like to Logician and now some other software, is the
network actually encouraging e-prescribing and kind of how is that working?

MR. BRADSHAW: Let me correct you on one assumption, we didn’t give away
licenses, what we did was we subsidized licenses, we essentially pay for about
50 percent of the cost of them, our philosophy being that everybody needed to
have skin in the game. Now as to the e-prescribing, the initial
implementations,
the 1,000 licenses of Logician, really didn’t have e-prescribing per se. The
system that we are now getting ready to roll out, the wireless system that I
mentioned, does have e-prescribing and it does work very well and again we’re
subsidizing that, we’re not paying for it all and we’re not giving it away,
but it is a
part of the total vision.

DR. WARREN: And then just one final question about the formation about your
information network, as you brought in new partners what kinds of trust issues
did you encounter and how did you solve those about people being able to come
in and wanting the data but also willing to share their own in the network?

MR. BRADSHAW: That’s always an issue and that was a tough nut to crack.
Essentially all the various participants in this network had a standard
contract that we signed that basically had stipulations about what was required
in terms of sharing data and all of that kind of business. I have to tell you
that there were some pretty tough negotiations with some folks because that is
a major, very emotional issue in some cases. But essentially we had a baseline
of requirements and that was one of the requirements, you share your data or
we’re just really not going to share the rest of it with you.

MR. REYNOLDS: Shaun, how about your advice?

DR. GRANNIS: Regenstrief and Clem started back in 1972 working on
integrating all these hospitals together but I think the key strategy to
building trust was incrementalism and being very clear with the value and what
I mean by that is the INPC started by sharing emergency department data so Clem
and Marc Overhage went out to the hospitals and said does it make sense to you
that emergency department physicians should have a clearer picture of the
patient when they come in and hospitals said yeah, yeah it does, then let’s
work through how we’re going to secure this data, make it safe and readily
accessible. So there’s value along with that trust to get people at the table
and then incrementally you move forward.

MR. REYNOLDS: Judy, were you finished? Wanda.

MS. GOVAN-JENKINS: In looking at slides seven and eight you commented, and it
wasn’t clear to me, you said you did not want to use the Social Security Number
but when you did the Jerry Bradshaw search only four came up, what if 100 came
up, how do you narrow it down to —

MR. BRADSHAW: You have to increase the data. In fact we advocate using Social
Security Number, I didn’t in this example but we advocate using it and the only
way to cut down the number of hits is to increase the amount of valid data in
the search.

MS. GOVAN-JENKINS: And then another question, under slide ten, the break the
glass, does the clinical data include the site data for that patient?

MR. BRADSHAW: Mandates in the State of Arkansas, and more so in Texas, are
against open access to that data, as well as to sexually transmitted disease
data, so no, that was suppressed.

MR. REYNOLDS: Michael?

DR. FITZMAURICE: Two fairly quick questions I think, one, you mentioned that
you wish you’d added a check digit, is there a problem with adding a check
digit today? Is it the cost, is it that you have fixed width formatting,
storage —

MR. BRADSHAW: There really isn’t, we could easily add one today, we just
haven’t done it. The UPI is not used that much as an identifier except
internally at this point; were we to ever go external with it, yeah, it’d be
really the right thing to do.

DR. GRANNIS: There certainly are some real retooling issues. I think back in
the Clinton health care reform era they looked at some estimates for adding a
check digit to the Social Security Number and it was on the order of billions
of dollars to do so. We can’t just today say let’s throw a check digit on Social
Security Number, you need to reissue those and people need to start adding them
to their systems and building that into their systems. So there’s a fair amount
of retooling that needs to be done both operationally and administratively to
do something like that and so get it in there from the get go please.

DR. FITZMAURICE: That sounds right if I remember, because if there were 250
million people that need these, actually 300 million people now, and it costs
maybe $4.00 to send out a letter with a number in it, you’re over a billion
dollars right there.

The second question, the quality of the data linked. Now maybe you don’t have
as much of a problem because you’re dealing with administrative data, Jerry,
but when records are matched, I’m pretty sure we’ve got a record and maybe
they are matched, but some clinical variables are extremely important for
clinical decisions. You want to be certain about blood type for example, and
if you had a clinical decision support algorithm that went through and said
let me pull out blood type, does blood type from this source equal blood type
from this source, if not we want to send an alert to the physician. I admit
that’s probably not a master patient index problem but yet it might be
something important to know to double check the blood type.

MR. BRADSHAW: I understand exactly where you’re going and you’re right, it’d
be the right thing to do. We didn’t, we never built that into the system,
however, one of the things that we did build into the systems and one of the
big cost drivers in the system was normalization of data, and of course you
have to normalize both the internal knowledge and gradient scales and all that
kind of thing and I would hope that if you had two different blood types you’d
pick that up as the result of doing that.

DR. GRANNIS: I guess just trying to think about a clinical picture: if I’m
typing and crossing and have 20 minutes or an hour and a half I’ll probably
rerun the test right there rather than rely on it. But if the patient’s
bleeding out and I need to do something quick and I have this information
available linked, I might say well, this is the best I have, let’s make the
decision to move forward, and I think I’d be justified in saying well,
Hospital Z says it’s this type and otherwise the patient is going, yeah. I
think we need to sort of weigh the risks and benefits —

DR. FITZMAURICE: But if Hospital X says O and Hospital Z says AB —

DR. GRANNIS: And I have time I’m going to rerun the test —

DR. FITZMAURICE: And you don’t have the time —

DR. GRANNIS: That’s problematic.

MR. REYNOLDS: Okay, Karen and then Steve.

MS. TRUDEL: I just want to make sure I’m understanding something, it sounds
as though both of you have said that while a lot of people think that it’s not
appropriate to use the Social Security Number for a lot of these things at the
end of the day it really is critical to make sure that you match the right
person to the right record. You have to nod louder —

MR. BRADSHAW: Yes, I agree. It is for us, as I mentioned using Social
Security Number as a static identifier we get about a 99.5 percent hit rate,
without it we fall below 90.

MS. TRUDEL: So I guess the point I’m making is this is just a reality of
the —

DR. GRANNIS: We use what data is consistently and readily available in the
HL7 messages and that is name, date of birth, gender and 70 percent of the time
Social Security Number.

MR. REYNOLDS: But I think a key point is, as we’ve seen in some of the
legislation, as identity theft continues to be more of an issue, it currently
has not restricted Social Security Number inside institutions. However, I know
in some states there is a push to eliminate the use of Social Security Number
even in normal business practice, which would go dead against what this
discussion was, because if it is basically removed as a possible identifier
even inside standard business practice, it shuts it off. I mean I totally
agree with what you said, but that’s where this whole thing becomes an overall
discussion, not just pieces and parts, because any move one way or the other
can shut down a path that somebody has been going on.

All right, we’ve got Steve and then Jeffrey.

DR. STEINDEL: I think Jeff might have a question that’s more to the point
but I’m going to go a little bit off the point to you, Jerry, this has to do
with the questions Jeff was asking of Shaun concerning the migrant worker
population. Do you have any experience with that in the UPI?

MR. BRADSHAW: I really don’t, sorry.


MR. BLAIR: Shaun, if I heard you correctly a moment ago you indicated that
70 percent of the matches from patients to their records in your experience
have involved the use of the Social Security Number?

DR. GRANNIS: No, what I said was 70 percent of the patient records have
Social Security Number with them —

MR. BLAIR: And you use whatever is available. How would that be different
than indicating that, you’re saying, 70 percent of the time when you have a
match part of that match is the Social Security Number?

DR. GRANNIS: No, that’s not what I’m saying, I’m saying that 70 percent of
the time there is a number available for linkage, I can’t tell you the number
of positive links that we get that involve Social Security Number, if we don’t
get a match on Social Security Number we will look at full name and date of
birth subsequently.

MR. BLAIR: All right, maybe you’ve even answered my question but let me
rephrase it and maybe you could either directly or indirectly see if you could
help us with this. Since you’ve had experience trying to match records where
sometimes the Social Security Number is available and sometimes it’s not are
you able to help us understand how important or critical or essential having
the Social Security Number is to successful matches?

DR. GRANNIS: Yes, in my presentation I showed if you constrain patients to
within Indiana and use name and full date of birth we don’t see any false
positive hits. If we expand the search space, if we search for patients outside
of Indiana, so if I take my patients who are in Indiana hospitals and look for
them in the Death Master File nationally using name, date of birth and gender
we get false positive hits. So what I’m saying, what that information tells us
is we need additional qualifying information and Social Security Number could
be one of those identifiers, or a National Patient Identifier could be, or we
could throw zip code in there and decrease our false positive rate but also
decrease our true positive rate as well. So these are the tradeoffs, and
investigations and studies need to be done, I think, to better understand
them.

MR. BLAIR: Okay, last question. You’ve made a tremendous amount of progress
with a for the most part probabilistic approach, maybe more progress than many
of us thought could be made five years ago with a probabilistic approach. But
in the areas that you serve, in this case Boston and Indiana and Mendocino
County, that’s still a subsection nationwide and if we get to areas where we
either have patients that fraudulently are using Social Security Numbers to get
health care or are undocumented in some way or migratory in some way, what do
you think is the low hanging fruit that could be used to supplement the patient
identification process that you’ve developed to at least begin to help us
identify patients that I’ve just described?

DR. GRANNIS: This is a challenging question. I’ll rephrase my answer from your
previous question: we do see migrant workers in the Indianapolis area, and we
also see people who falsely misrepresent their Social Security Number. I don’t
know if those are one and the same population, I’m not saying they are, but we
do deal with that. In this case, and we talked a little bit about this in the
break, we receive at Regenstrief whatever data the hospitals are willing to
send to us; we’re not in a position where we can constrain it. But the
question of how you can validate or authenticate who a patient says they
really are, the patient authentication question, has application to patient
health records and other issues, and I don’t know of a good answer at this
point for how you authenticate the patient, I’m not sure what that answer is.

MR. BLAIR: Okay, thank you.

MR. REYNOLDS: Okay, one last question. So both of you kind of come down to a
number, right? As to how you identify the person.

DR. GRANNIS: The number will help and I think there are advantages. Can we
do it without the number? I think so —

MR. REYNOLDS: No, I know, but what I’m saying is both of you come down to a
number now, kind of a unique identifier, is that right? Currently. You do and
you don’t —

DR. GRANNIS: No, and I wanted to qualify my answer to Dr. Steindel’s question
from earlier: even though we have this global identifier in the system it is
only for indexing purposes, it’s not used in matching algorithms at all.

MR. REYNOLDS: But I guess the question then becomes would you ever put it
on, would you ever give the person a card? Or would you add it, you’ve got
multiple payers in Arkansas, would you ever put it on the card or would you do
the same thing? In other words you have distilled it down to a unique

MR. BRADSHAW: I see nothing that would be wrong with doing that so I
wouldn’t have any objection to it.

MR. REYNOLDS: I’m not recommending it, I’m asking, because Indiana, on the
one hand we talk about a national identifier and then you distill it down and
if you’ve seen a patient and you’ve gone through all the stuff to figure out
who they are and what they are and so on, even if they later change their name,
you gave them an identifier, but I didn’t hear either one of you say okay now
we got it, let’s make that be what we use.

DR. GRANNIS: We explicitly don’t want to use it because it can change over
time again because this transitive issue of if A equals B, if B equals C, and C
equals D, when you go to a national system there's too much potential with
these groupings, I don't want to get into the technical aspects but patients'
identity may change over time, our goal is to link what people send us, not to
be the final arbiter of that information.
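The transitive-linkage concern Dr. Grannis raises (if A matches B and B matches C, then A ends up grouped with C) can be sketched with a minimal union-find example. This is illustrative only, with hypothetical records, and is not the Regenstrief matching algorithm:

```python
# Minimal union-find sketch of transitive record linkage: if A matches B
# and B matches C, all of them collapse into one identity cluster, so a
# single bad pairwise match can silently merge two different patients.
# Illustrative only; not the Regenstrief matching algorithm.

def find(parent, x):
    # Follow parent pointers to the cluster representative.
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    # Merge the clusters containing a and b.
    parent[find(parent, a)] = find(parent, b)

records = ["A", "B", "C", "D"]
parent = {r: r for r in records}

# Pairwise matches asserted by a linkage step (hypothetical).
for a, b in [("A", "B"), ("B", "C"), ("C", "D")]:
    union(parent, a, b)

clusters = {r: find(parent, r) for r in records}
# All four records now share one cluster representative.
```

The sketch shows why an identifier derived from such clusters is unstable over time: one retracted or corrected match can split or re-merge a cluster, which is the speaker's reason for using the global identifier only for indexing.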

MR. BLAIR: Could I jump in here? Because I think the point that Harry is
trying to make if I understand it correctly is that even if in the end you wind
up coming up with a number that you generated and it’s only for indexing
purposes, if somebody completely removed the Social Security Number that would
do tremendous damage to your ability to match patients in both cases. Is that
not correct?

MR. BRADSHAW: That is correct.


MR. BLAIR: Okay, I think that’s the point that Harry was trying to make.

MR. REYNOLDS: Well it sounded better the second time but I’ll take credit
for both of them. Are there any other questions they want to ask? Okay,
excellent job, great morning, Judy, thank you to also for —

DR. WARREN: I have one request. I know we have a subcommittee discussion
scheduled for later this afternoon, what I’d like for people to think about is
what kind of testimony you would like to hear next on this because both of
these gentlemen have stimulated a lot of discussion and have mentioned a lot of
things and I know that Marjorie and Donna went to a meeting last week where
this issue came up again. So anyway, just be thinking about it so we can decide
what other testimony we need to hear on this topic.

MR. REYNOLDS: Okay, thank you very, very much, really good.

From a standpoint of the committee we’ve been thinking about dinner tonight,
Simon made some reservations at Magianos, could we see a show of hands of who
would be going? Okay. Good. All right, we will adjourn now until 1:00 and start
promptly at 1:00. Thank you very much.

[Whereupon at 12:03 p.m. the meeting was recessed, to reconvene at 1:04
p.m., the same afternoon, December 7, 2005.]

A F T E R N O O N  S E S S I O N [1:04 p.m.]

MR. REYNOLDS: Okay, we’re back from lunch and preparing to begin our
afternoon session, it’s going to be made up of an e-prescribing panel giving us
an update, including Lynne Gilbertson, Tony Schueth, Laura Topor and Lynne
Gilbertson, so I guess we’ll go, what we’ll do is we’ll let everybody present
within the approved timeframe and then we’ll open the whole thing for
questions. So Lynne, since you're on here twice we'll let you go first and —

Agenda Item: E-prescribing Update – Industry
Overview – Ms. Gilbertson

MS. GILBERTSON: Thank you very much. This is a status to the subcommittee on
the NCVHS recommendations to HHS on electronic prescribing for the MMA. I’m
just going to go through, as I did before, the different observations that the
committee proposed to HHS and give you a status on each of the applicable
observations.

The first observation is number three which dealt with prescription
messages, and this is to include the fill status notification functions of the
NCPDP SCRIPT standard in the 2006 pilot tests. And the status to bring you up
to date on this action is NCPDP Workgroup 11 e-Prescribing and Related
Transactions, our RXFILL Task Group created the implementation and operational
guidance to pharmacy and prescriber system participants for the consistent
utilization of the fill status notification transactions. These were added to
Script Standard Implementation Guide version 8.1 which was approved and
published in October 2005 so what they did was add more guidance to the guide.

Observation 5 which is formulary messages, HHS should actively participate
in and support the rapid development of an NCPDP standard for formulary and
benefit information file transfer using the RxHub protocol as a basis. This was
brought forward, it was vetted through the industry, and NCPDP Workgroup 11,
e-Prescribing and Related Transactions, had a Formulary and Benefit Task Group
led by Terri Byrne of RxHub, and they presented a standard for approval at the
March 2005 workgroup meetings, it went through the vetting process, the
balloting and the approval process and the NCPDP Formulary and Benefit Standard
Implementation Guide version 1.0 was approved by NCPDP and then ANSI and
published in October 2005.

Observation 7, Prior Authorization Messages, the recommendation to develop
prior authorization workflow scenarios to contribute to the design of the 2006
pilot tests and of automating prior authorization communications between
dispensers and prescribers and between payers and prescribers in its 2006 pilot
test. NCPDP Workgroup 11 has a Prior Authorization Workflow to Transactions
Task Group, which is led by Tony Schueth of Point-of-Care Partners, and
consists of X12N, Workgroup 10, Health Care Services Review co-chairs, HL7
representatives, and a lot of other interested stakeholders, and Tony will
present an update next.

Observation 8, Medication History Messages from Payer/PBM to Prescriber, and
this was a recommendation of an NCPDP standard for a medication history message
for communication from a payer/PBM to a prescriber using the RxHub protocol as
a basis. The NCPDP SCRIPT standard implementation guide version 8.0 was
approved and published in July 2005 with ANSI approval in September and it
includes the
medication history messages.

Observation 9, Clinical Drug Terminology, this was included, the
recommendation to include in the 2006 pilot tests the RxNorm terminology in the
SCRIPT standard for new prescriptions, renewals, and changes. The status of
this item, the Workgroup 11 formed a RxNorm Task Group in 2005, John Kilburn of
the NLM is a very active participant, the task group has participation from
pharmacy, prescriber, drug knowledge base, vocabulary and other interested
parties including HL7 representatives, ASTM, SNOMED, and obviously a lot of
industry participants. Initial calls have been spent in education of RxNorm
codes and the usage and beginning to work through the different questions that
have come up. Tony Schueth, Point-of-Care Partners, is the task group leader,
he loves to volunteer. The task group has contributed to the first sections of
an e-prescribing pilot guidance document which I’ll talk about under
Observation 13.

Observation 10, Structured and Codified SIG, the recommendation for NCPDP,
HL7 and others, especially including the prescriber community in addressing SIG
components in their standards. Workgroup 10, Professional Pharmacy Services,
has an Industry SIG Task Group which is led by Laura Topor of
PricewaterhouseCoopers and Laura will update you in this next session as well.

Observation 13 dealt with Pilot Test Objectives. The e-prescribing guidance
document I just spoke about a minute ago, the workgroup created this document
which will be a publicly available document on the NCPDP website for the
e-prescribing piloters. From a standards perspective the guidance document
offers information to assist with the uniform needs of the piloters. The
document includes guidance on the use of RxNorm in the SCRIPT and the
Formulary and Benefit standards, as well as guidance on the draft code
qualifier values so that there's uniform implementation of the transportation
of RxNorm codes in these standards. For example it tells the reader the exact
fields they should be using in SCRIPT and in formulary and benefit of where
they transport the RxNorm code and qualifiers of what to use.
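As a rough illustration of that kind of uniform field guidance, the sketch below pairs an RxNorm concept identifier with a code-system qualifier so every piloter would populate the same two fields the same way. The field names, the qualifier value, and the sample RxCUI are illustrative placeholders, not the actual SCRIPT element names or qualifier codes:

```python
# Sketch of transporting an RxNorm concept identifier (RxCUI) plus a
# code-system qualifier in a prescription message. Field names and the
# qualifier value are hypothetical placeholders, not actual SCRIPT
# element names; the RxCUI shown is an illustrative value.

RXNORM_QUALIFIER = "RXNORM"  # hypothetical qualifier value

def drug_coded_fields(rxcui: str, description: str) -> dict:
    """Return the coded-drug portion of a message as a plain dict."""
    return {
        "DrugCoded.ProductCode": rxcui,
        "DrugCoded.ProductCodeQualifier": RXNORM_QUALIFIER,
        "DrugDescription": description,
    }

msg = drug_coded_fields("197361", "Amlodipine 5 MG Oral Tablet")
```

The point of the guidance document is exactly this kind of agreement: same field for the code, same field for the qualifier, and a shared answer to where the codes come from (the UMLS or a distributed file).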

The RxNorm Task Group is finalizing the first set of guidance information.
Other guidance will be added to this document from the Prior Authorization
Workflow and the Industry SIG Task Groups as soon as it’s ready. This document
is intended to be for the piloters, now they won’t be announced until sometime
in December and the word is very late December, but it’s intended since there
are transactions that are named in the RFAs for the pilots, that there are some
things that aren't done yet or aren't fully vetted through an implementation
guide and we’d like the piloters to all be using the same codes, the same
fields, whatever it is, or at least even know where to go get the information,
for example what RxNorm codes to use, whether you get it out of the UMLS or you
get it out of a zip file or things like that that will help the piloters if
they need that kind of guidance.

Workgroup 11 in November also approved the creation of a task group of
interested parties to work through other suggestions that might come in as part
of the pilots. These suggestions will become updates to this guidance document.
And as guidance is vetted through the pilots requests for formal adoption into
the NCPDP standards will then be submitted by the industry, so those become the
norm that’s being used.

Also related to electronic prescribing, an update I wanted to give you on
the long term care work, NCPDP Workgroup 14, Long Term Care, has formed task
groups to work on the needs of this sector, especially in light of the MMA.
Some of the information to give you, they have a task group called LTC
Consultant Pharmacists, this task group is creating a standard for the
consultant pharmacists and their software that would interface with electronic
prescribing and adjudication systems.

They have a task group working on long term care current billing issues,
they are working to address issues such as post consumption, split billing,
infusion billing after change in status, place of service code, coordination of
benefits, etc., a lot has to do with the Medicare Part D requirements on this
sector of the industry. They have completed the first round of recommendations
which went into our version five editorial which is a guidance document for
implementers of the telecommunication standard version 5.1. That's been a bit
of a challenge because with HIPAA frozen on the named 5.1 telecom standard,
and Medicare Part D not even being thought up back in '96, we've had to come
up with some very creative solutions to address industry needs while being
constrained by the current standards and not being able to move versions and
support the better solutions rather than kludges.

They also have a Long Term Care EHR/HL7 Task Group, this group is really
exciting to be on calls with, they are running on caffeine, they are so great
at getting things done, getting things out to the task groups and just creating
wonderful
documents. They are collaborating with HL7 on the long term care pharmacy needs
for the electronic health record and how it relates to the minimum dataset
requirements and the drug regimen review. They are working on the data
definition work between the CPOE and the nursing home system. They are finding
complexity in managing patient and prescription identifiers, where have we
heard that, between the three systems, the CPOE, the nursing home, and the
pharmacy. Work Group 11 is assisting this task group with understanding the
current e-prescribing functionality that’s out there and to provide assistance
incorporating the long term care needs into the e-prescribing standards. And
they are also working on a refill renewal process for long term care.

The Workgroup 14 and then their task groups have pulled expertise from
across the long term care industry from organizations, standards bodies,
interested participants. They have many challenges but they have built a solid
foundation and they are really going to town.

And the last topic on e-prescribing is versioning but I'll address that in
the fourth section of this session.

Thank you very much.

MR. REYNOLDS: Okay, thank you Lynne. Tony?

Agenda Item: E-Prescribing Update – Prior Authorization
Task Group – Mr. Schueth

MR. SCHUETH: My name is Tony Schueth and I am the task group leader for the
Prior Authorization and Workflow to Transaction Task Group. As Lynne said my
background is that I’m currently the founder and the managing partner of
Point-of-Care Partners, which is a boutique consulting firm. My background is
I’ve been involved in electronic prescribing for about ten years, for five
years I was employed by a pharmacy benefit manager, five years prior to that I
was employed by a software company. And as for my consulting, about 80 percent
of our business is related to electronic prescribing, so we've done work for
just about every stakeholder in electronic prescribing. I'm very pleased to
give the update today on the absolutely amazing progress that our task group
has made relative to standardizing and automating prior authorization.

So the task group was formed a little more than a year ago, and in fact at
this time last year we hadn’t even met, so the progress that we’ve made has
been in my view pretty significant. We’ve made an amazing amount of progress
for a couple of reasons, one is we just got some great volunteers, some people
that have really put forth an amazing amount of effort. I think that there’s an
industry need right now but I’d like to also acknowledge that we’ve received
two very small grants from AHRQ that have kind of helped our process and I
would strongly advocate this as kind of a model for standards development
because sometimes in the standards development process things can get kind of
bogged down because there's a lot of tedious work that needs to be done and
it's tough to get volunteers to do that kind of work.

So let me give you a specific example related to prior authorization. We
needed to do some analysis, we had collected 350 forms, prior authorization
forms, and we needed to do some analysis of that. Well getting any one of us
that has a full time job to do something like that would have been an amazing
challenge, so what we did is we found a gentleman who is straight out of
pharmacy school who had the skill set but hadn’t started a full time position
and we were able to contract with him for a very reasonable rate to kind of do
this work. And we found that to be just really helpful to our process.

So with that said our objectives were to identify the standards required to
support e-prescribing and the electronic delivery of PA related transactions,
and to understand the PA workflow in the physician’s office, plan, pharmacy,
and also in long term care.

I’ve presented this slide before, I talked about it with the task group and
we decided that this can’t be said enough. And it’s our philosophy and I want
to quote this for the record, that this is a quote from one of our task group
members, that this is not an attempt to usurp the coverage decisions of the
plans but instead it’s an effort to streamline and standardize the mechanism
for the activity. I think early on some people might have been confused about
the objective of the task group and so we don't think that that can be stated
enough.

This particular slide I’ve shown as well in the past and the important kind
of take aways from this slide is that the task group is composed of not only
NCPDP members but also folks from HL7 and X12, so it’s a multi-SDO task group.
We have representation from across the industry, physicians, pharmacists,
nurses, technical folks, business folks, we have representatives from all the
different stakeholder organizations as well, managed care organizations,
payers, health plans, pharmacy benefit managers, software companies, you name
it we’ve got representation. And in fact the group is so representative that
HL7 was recently able to kind of cut a step in their process of approving one
of the related standards that I’m going to talk about in a minute because the
industry was so well represented.

Now before I get into this I’d like to do a couple definitions, I know we’re
limited in time but if you’d please indulge me I’d appreciate this, because I
think some of the confusion has stemmed from maybe not everybody attributing
the same meaning to different words. So status I think is
pretty simple, I mean a drug is either on prior authorization or not, that’s
what we talk about when we talk about status. Criteria, the American Academy of
Managed Care Pharmacy defined criteria as systematically developed statements
that could be used to assess appropriateness. And a rule is a code or set of
codes governing action or procedure.
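Those three definitions could be modeled as distinct pieces of data, roughly along these lines. The names and the sample rule here are invented for illustration and are not drawn from any NCPDP, X12, or HL7 artifact:

```python
# The three terms distinguished above, modeled separately: status is a
# flag on the drug, criteria are the facts a plan collects, and a rule
# is logic evaluated over those facts. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PriorAuthPolicy:
    requires_pa: bool                                     # status: on PA or not
    criteria: list[str] = field(default_factory=list)     # facts to collect
    rule: Callable[[dict], bool] = lambda answers: False  # governs the decision

policy = PriorAuthPolicy(
    requires_pa=True,
    criteria=["diagnosis", "patient_age"],
    # Sample rule: approve when the diagnosis is osteoarthritis (invented).
    rule=lambda answers: answers.get("diagnosis") == "osteoarthritis",
)

approved = policy.rule({"diagnosis": "osteoarthritis", "patient_age": 67})
```

Separating the three keeps the plan in control of all of them, which matches the task group's stated philosophy: the mechanism is standardized, the coverage decision is not.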

And so what I’d like to do is we’ve got a form and I’d like to just make
sure that everyone, that I point out that criteria would be patient
information, information about the medication, things like that. It might even
include information about prior medications that the patient is on. But a rule,
an example of a rule would be osteoarthritis and osteoarthritis must meet
certain criteria below in 4a, 4b, or 4c. You can see that here’s, so they put
the rule in the particular form.

Now last time I testified that was one of the things that was asked about
which was how often did that happen and my testimony was more often than not
the rule wasn’t out there on these forms. In this particular case it is, I
didn’t pick this to make a point, this is a slide that we’ve used actually in
the past.

What I’d like to do is just a little level set and just talk a little bit
about the proposed flow for a second. So what’s going to happen is the payer
still would determine the status, criteria, and rules for prior authorization.
A drug can be flagged as requiring prior auth and simple rules applied, and this
can be transmitted via the formulary and benefits standard.

So when the patient comes to visit the physician and the prescriber writes a
prescription they’re going to see the status as requiring prior authorization,
they’re going to complete, actually what’s going to happen is, at least the way
we envision this that’s feasible for the pilots, is that they would launch a
278 which would be an indication or a notification to the plan that the
physician would like to request prior authorization. Then in a response back, a
275 plus a PA attachment, they would get the criteria, a series of questions
that they would need to have filled out. And then the physician would then
respond again with the answers to these questions and that information would go
back to the plan where the plan would make a determination as to whether they
were going to grant prior authorization, deny it, or request additional
information.
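The request-and-response loop just described can be sketched as follows. The message shapes here are illustrative dicts standing in for the real X12 278, the 275 plus PA attachment, and the plan's determination logic:

```python
# Sketch of the proposed exchange: the prescriber opens a PA request
# (modeled on the X12 278), the plan answers with criteria questions
# (modeled on the 275 plus PA attachment), the prescriber returns
# answers, and the plan renders a determination. Payload shapes and
# the determination rule are illustrative, not real X12/HL7 content.

def open_pa_request(drug: str) -> dict:
    # Step 1: "278" - notify the plan that PA is being requested.
    return {"type": "278-request", "drug": drug}

def plan_send_criteria(request: dict) -> dict:
    # Step 2: "275 + PA attachment" - the questions to be answered.
    return {"type": "275-criteria",
            "questions": ["diagnosis", "prior_therapy_failed"]}

def plan_determination(answers: dict) -> str:
    # Step 3: approve, deny, or ask for more information (invented rule).
    if "diagnosis" not in answers:
        return "request-more-info"
    return "approved" if answers.get("prior_therapy_failed") else "denied"

req = open_pa_request("celecoxib")
criteria = plan_send_criteria(req)
answers = {q: True for q in criteria["questions"]}
answers["diagnosis"] = "osteoarthritis"
outcome = plan_determination(answers)
```

On approval, the resulting authorization number would then ride in the existing SCRIPT field to the pharmacy and in the telecommunication standard's field on the claim, as the testimony goes on to describe.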

So ultimately if the approval is granted a number goes to the prescriber and
then in SCRIPT, the SCRIPT already has a field for the approval number to be
transmitted to the pharmacy, and then from the pharmacy the claim is submitted
to the payer and telecommunications also has a field that allows, that
accommodates that approval number.

MR. REYNOLDS: I’m asking a clarification on a question, you’re saying that
part of what you’re doing is the 275?

MR. SCHUETH: Yes, yes it is.

MR. REYNOLDS: And that’s a regulation that’s, that’s the claim attachment,
and the NPRM is not out yet, again, I don’t want to get into a discussion, I
just wanted to make sure that’s what I heard.


MR. REYNOLDS: Fine, then please continue.

MR. SCHUETH: The plan is to leverage existing standards and I would like to
make another point on this particular slide that’s related to what you’re
asking about, Harry. The SCRIPT and telecommunications already accommodate
prior authorization so where the task group has spent the majority of its time
has been on the 278 and the 275 plus the PA attachment, that’s where we have
spent the most of last year is working on those two pieces of it. We’ve
obviously spent, I talked earlier about normalizing data, normalizing the
criteria that we’ve received, that applies to some other areas as well, but
this is where we spent the bulk of our time.

Some of our accomplishments are the following, we’ve mapped the ambulatory
PA workflow. We’ve leveraged the AHRQ grant that we talked about before to
complete the analysis of these forms, there were 350 forms, 1750 questions from
53 different payers. We created a database to support this. And then from that
database we obviously had many ways that different plans would ask kind of the
same question so when we talk about normalizing that’s what we mean. So what we
decided to do was that we were going to normalize that and we focused on six
therapeutic categories. We did an analysis and we found that the industry, that
there are six therapeutic categories that the industry kind of most frequently
applies prior authorization to so we focused on those and those are erectile
dysfunction, anti-fungals, growth hormones, NSAIDs and COX-2s, PPIs, and Opioid
Agonists, and so those six therapeutic categories, we normalized that and
that’s kind of what we’re working with at this point. We also formed a separate
task group to address PA in long term care, and we’ve mapped the long term care
PA flow.
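The "normalizing" step mentioned above, collapsing many plans' wordings of the same question into one canonical criterion, could look roughly like this. The variant phrasings and canonical names below are invented for illustration; the task group's actual normalized set from the 1750 questions is not reproduced here:

```python
# Sketch of question normalization: different payers ask the same
# criterion in different words, so variant phrasings map to a single
# canonical criterion name. The mapping below is invented.

VARIANT_TO_CANONICAL = {
    "what is the patient's diagnosis?": "diagnosis",
    "please indicate the diagnosis": "diagnosis",
    "icd-9 code for this request": "diagnosis",
    "has the patient failed first-line therapy?": "prior_therapy_failed",
    "list previous medications tried": "prior_therapy_failed",
}

def normalize(questions: list[str]) -> set[str]:
    # Collapse each plan's wording to the shared canonical criterion.
    return {VARIANT_TO_CANONICAL[q.strip().lower()] for q in questions}

canonical = normalize([
    "What is the patient's diagnosis?",
    "List previous medications tried",
])
```

With 350 forms and 53 payers, this kind of mapping is what lets six therapeutic categories share one standardized question set instead of dozens of near-duplicates.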

Now this next slide is a timeline, I’m not going to go into a lot of detail
about it. The bottom line, it’s public record the fact that I’m submitting
this, the bottom line though is this, we are in a position right now where the
278 and the 275 are ready to be piloted today. The PA attachment that we talked
about, which we worked on through HL7, that will be available to be piloted by
the end of January after the HL7 meeting in January. So by the end of January
we will be in a position where we could pilot test kind of all of these.

On the printout you all received there is a next steps and the next steps
are the following. That we need to sort of complete the ballot of the HL7 PA
attachment, at that point what we really need to do is we really need to pilot
test all of this. Now the CMS URX pilots for next year include prior
authorization but they only include the 278 and what the RFA said was latest
versions of the 278. Now the 278 is a HIPAA named transaction, HIPAA named
version 4010, 4010 was designed for services and procedures, not for
medication, they had work arounds. The 278 that we’ve been working on now is a
version 5010, so what we did is we fixed the work arounds, but the notion was,
the contemplation was, the people working on our task group realized all the
information that needs to go back and forth between the plan and the provider,
and what they decided is instead of building out the 278 so that it would
accommodate all that, they would have it work with the 275 plus the PA
attachment.

Now the RFA for the pilots January 1st does not ask for the 275
plus the PA attachment, they only ask for the 278. So as they exist today, as
they stand at the time that those of us that submitted, coalitions that
submitted our application, all you can do is the 278 and very rudimentary prior
authorization, and by rudimentary I mean things like age and gender limits,
those kinds of things.

MR. BLAIR: Will they get that clarified by asking CMS and AHRQ about whether
the pilots actually will be limited by the RFA?

MR. SCHUETH: Well, unfortunately Maria isn’t here today, she’s well aware of
this, and Karen had lunch with Barbara McKinnon from Point-of-Care Partners and
I and she said that the 278 in the latest versions that was sort of a
requirement for the application meaning that there would be a possibility, the
door is open here that they could do more than the 278 but the challenge is
this. I was a part of a coalition and my coalition, we scoped out doing the 275
plus the PA attachment to go with the latest version of the 278 and what we
found was that put us over the $2 million ceiling, so the RFA had a ceiling of
$2 million, no coalition could go over that, and we
found that that put us over that ceiling. And so the point is I can’t reveal
publicly, but the point is that it’s an expensive proposition to do this 275
plus the PA attachment.

MR. BLAIR: Let me save it for the question time but thank you.

MR. SCHUETH: And one other presentation that we wanted to do, so our task
group has been very actively involved in as I said the 275, X12 and HL7.
There’s a sort of a subgroup, an ad hoc subgroup of us that got together to
address another challenge, obviously I just went through this flow, the whole
notion of completing a structured Q&A has been sort of on our to do list,
on the task group’s to do list, and we kind of felt like the task group had
their hands full working on the HL7 and the X12 kind of stuff and so what we
did is we formed this little ad hoc group, some of them are on our task group
and some of them aren’t, to look at whether, look at different ways that we
could address this notion of completing a structured Q&A.

And what we did is we submitted an application to AHRQ, another small grant
to sort of fund an analysis of GELLO(?) versus prior authorization. What we did
is, and by the way, this is the reason also that I’m using Point-of-Care
Partners letter template to do this presentation instead of NCPDP because it
was outside of NCPDP. Our project began on October 3rd, was
completed just three weeks ago November 20th, what we did is we
really did three reports. The reports were the following, we decided that we
would do an analysis of the HL7 RIM versus the six normalized therapeutic
categories that I just talked about in the last presentation. We’d match HL7’s
RIM versus these GELLO expressions for six normalized therapeutic categories.
And then we would develop a simulated model of a web based GELLO authoring and
delivery tool.

Now the project team included myself, Barbara McKinnon from Point-of-Care
Partners, Ross Martin who’s in the audience today, was an unpaid volunteer, and
then we had some experts, two folks that were experts at GELLO, the Guideline
Expression Language that I’m going to talk about in a second, Dr. Sordo(?)
and Dr. Grenis(?) from Partners Healthcare, and then a gentleman, a physician
by the name of Robert Dunlop from a company called InforMed(?), looked at HL7,
the HL7 RIM piece of all this.

I probably should have started with this, let me talk a little bit about
what GELLO is, I know some of you are very familiar with GELLO, maybe not
everybody. GELLO is Guideline Expression Language and it’s not a true acronym
obviously. It’s an ANSI accredited standard that’s intended to be a query
language and an expression language, and that’s why the whole idea of the prior
authorization case study made just a lot of sense to us. It is like I said an
ANSI accredited standard, the specifications were developed in HL7 in the
Clinical Support Task Group and prior authorization was recognized as a use
case.

So where does GELLO fit into prior authorization? Well first it’s important
to state again plans and PBMs will continue to designate which drugs require
prior authorization, define the criteria and determine the rules. But GELLO
could help in encoding existing PA criteria to create machine computable data
that can be shared between payers and providers in a standards based
interoperable fashion. It could facilitate the automatic extraction of PA
criteria from within the EMR to pre-fill answers to questions so that the
doctor or his or her staff doesn’t need to re-key information. And it could
also support the prescriber’s answer to the PA criteria being transmitted in a
standardized message transaction.
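A rough sketch of that pre-filling idea follows, with plain Python lambdas standing in for GELLO expressions and a hypothetical patient record; actual GELLO syntax and real EMR field names are not shown:

```python
# Sketch of pre-filling PA answers from the EMR: each criterion carries
# a small machine-computable expression evaluated against the patient
# record, so staff only answer what the record cannot. Plain Python
# stands in for GELLO; record fields and criteria names are invented.

patient_record = {"age": 62, "gender": "F", "diagnoses": ["osteoarthritis"]}

criteria_expressions = {
    "patient_over_18": lambda r: r["age"] >= 18,
    "has_osteoarthritis": lambda r: "osteoarthritis" in r["diagnoses"],
}

def prefill(record: dict) -> dict:
    # Evaluate each encoded criterion against the record.
    return {name: expr(record) for name, expr in criteria_expressions.items()}

answers = prefill(patient_record)
```

The standards-based version of this would express the criteria in GELLO over the HL7 RIM so the same encoded questions work across any compliant provider system, which is the interoperability claim in the findings that follow.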

Now what were our findings? Well what we found was first that HL7 compliant
systems can accommodate PA although there would be some refinements that would
need to be made. The HL7 GELLO expression language provided a consistent
framework for a standard representation of the PA criteria. When coupled with
the HL7 RIM GELLO could support the sharing of information across applications
from a provider system to a payer system and GELLO plus the HL7 RIM allowed
integration of SNOMED, RxNorm, and ICD-9. Also the models that we simulated
demonstrated really a more standardized process and a comprehensive approach to
best practices.

What I’d like to do now is just, so this ad hoc group sort of talked a
little bit about how do you fit all this into one sort of view of the future
related to prior authorization. And so what I’d like to pose is a potential
kind of answer to that and I call this next slide a potential standards based
solution to prior authorization. First of all there’s the structured product
label that of course the FDA has been working on. What the structured product
label could do is include GELLO expressions, structured logic, fully computable
GELLO expressions that are human readable. For example this drug is
contraindicated in this particular situation. Then third party entities such as
medical societies, expert driven, consensus driven expert panels, could use
GELLO expressions in the SPL to build model clinical guidelines. Then payers,
plans and PBMs could use these guidelines as prototypes for their own PA
criteria aligning them with their plan benefits.

Clinical software vendors could download PA criteria into their delivery
tool. Standardized machine computable PA criteria could be presented to a
physician after being alerted that a drug requires PA. Answers to PA as I
mentioned before could be pre-filled by the EMR software, reducing the
possibility of error, minimizing time required to complete this and shortening
turn around time. Answers to questions and other justification can be
transmitted electronically to the payer or if the payer so chooses like the
example that I showed earlier where they didn’t mind having the rules in the
physician’s office, if they chose they could generate an answer immediately and
present to the physician immediately, an approval, a denial, or even a request
for more information. But ultimately payers, plans and PBMs could calculate
their responses, via whatever process they choose, including electronically.

Now what are the next steps with GELLO? So the next steps for GELLO are, for
it to be successful what we need to do is we need to complete analysis of GELLO
versus the basic HL7 query mechanism. We need to develop an open source
compiler or interpreter to translate GELLO queries and expressions into
executable statements for other programming languages. We need to develop HL7
RIM compliant GELLO interfaces to commonly used databases such as Oracle, SQL,
and DB2. We need to test GELLO in a live clinical care setting to ensure that
it’s capable of accurately executing clinical decision support rules and
extracting information from HL7 RIM compliant systems. We need to develop
prototypes of the authoring and the delivery tool based on the designs that are
described in the report that we completed. And ultimately this whole thing
needs to be piloted in the same way that the transaction component of this
needs to be piloted.

And that concludes my testimony, thank you very much.

MR. REYNOLDS: Okay, thank you, Tony. Laura?

Agenda Item: E-Prescribing Update – Codified SIG Work
Group – Ms. Topor

MS. TOPOR: Just to walk through a quick overview, recap some of the work
we’ve done to date, the structure, next steps, I think there’s going to be some
discussion about code sets and then to just highlight who’s been a part of this
and helped pull all of this together.

Standard SIG in one shape or form has been discussed for about ten
years. With the industry changing we never quite knew who was going to be
there, what was everybody going to do, what approach would they take. NCPDP had tried
before, ASTM with the Continuity of Care Record had tried, HL7.

We went into this with operating assumptions that it needed to be flexible.
We originally said the 80/20 rule, and the more we got into this we realized
that what we came up with covers 99 percent, and we’re leaving the one percent
simply for the things we don’t know are out there yet, just waiting for some
new form of medication dosing instructions. We wanted it to work across the
care spectrum, inpatient, outpatient, long term care, whatever the situation
might be.

And then I added one based on a recent call which is that we have assumed
all along, my task group, that we’re the only recognized SIG standard
structure, we don’t know that somebody else is out there, we haven’t heard of
anything but there was just concern that somebody else that we don’t know about
is out there trying to pull something together to also do this, and if they
have, clearly it’s been in a bit of a vacuum because NCPDP has been at the table, HL7
has been at the table, the folks through the CCR have been there, so we don’t
think there’s anybody out there but just in case. Again we’ve got approximately
110 people who’ve been a part of the task group, about 25 are pretty active and
the list is at the end so you’ll see that.

And goals and objectives were again to conform with existing e-prescribing
scenarios, we didn’t want to duplicate efforts that were happening anywhere
else, take advantage of all of the work that people have done and all of the
knowledge that they have so far, and, time and again, the flexibility and the
interoperability. This needs to fit within NCPDP, it needs to fit in HL7, it
needs to fit into CCR.

So we’ve done that, when I was here in July I walked through all of the
various segments, this is just to quickly remind everybody, we have segments for
dose, dose calculation, dose restriction, the vehicle, the route, the site, the
frequency, interval, administration time, the duration, stop, indication, and
free text. So we think we captured everything.
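
The segments just listed can be pictured as fields of one structured record. The sketch below is illustrative only: the field names and sample values are mine, not the task group’s actual NCPDP/HL7 format.

```python
# Illustrative sketch of a codified SIG as a structured record.
# Field names and example values are hypothetical, not the task group's format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodifiedSig:
    dose: Optional[str] = None                # e.g. "1 tablet"
    dose_calculation: Optional[str] = None
    dose_restriction: Optional[str] = None    # e.g. "max 4 tablets in 24 h"
    vehicle: Optional[str] = None             # e.g. "with water"
    route: Optional[str] = None               # e.g. "oral"
    site: Optional[str] = None
    frequency: Optional[str] = None           # e.g. "twice"
    interval: Optional[str] = None            # e.g. "daily"
    administration_time: Optional[str] = None
    duration: Optional[str] = None            # e.g. "10 days"
    stop: Optional[str] = None
    indication: Optional[str] = None
    free_text: Optional[str] = None           # the one percent the structure can't express

# "Take 1 tablet by mouth twice daily for 10 days"
sig = CodifiedSig(dose="1 tablet", route="oral", frequency="twice",
                  interval="daily", duration="10 days")
```

Anything that does not fit the structured fields falls through to the free text segment, which is how the one percent of unanticipated dosing instructions stays expressible.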

We started our calls in October of last year, thank you to CMS for their
sponsorship of those calls, we’ve had those every other week. I think we’ve
only canceled about three of them so it’s been very productive. We have met six
times at face to face meetings, the bulk of those have been in conjunction with
NCPDP workgroup meetings, and we’ve had one other meeting and we’re
meeting again in a week and a half.

Obviously we’ve been working with the other organizations, we have now
developed the format, drafted the implementation guidance document, and we did
confirm that what we’re doing is in conformance with ASTM/CCR, so we’ve done
all of that. And I also just want to thank Michael and AHRQ for their support
as well so that has been appreciated.

In November at the NCPDP meetings we presented the format content and the
draft implementation document. Workgroup 10 which was the parent for this task
group approved that and said great, go ahead, looks good. It moved to Workgroup
11 which is the owner of the SCRIPT standard for e-prescribing and they pended
their request for approval which is what we were hoping would happen, and they
have now formed a task group to take all of our work product and figure out how
to make it make sense in the SCRIPT vernacular. So if anybody wants to
volunteer for that please let me know, we’ll take all the help we can get.

In terms of our next steps we need to create a subset of the SNOMED codes to
be used which will be based on the task group recommendation and I’ll get into
that in a little bit more detail. We are meeting back here in D.C. on December
19th to try and work through that. We then need to continue working
to incorporate the content and the implementation guidance into NCPDP SCRIPT.
If you look at what a pilot timeline might be the guidance documentation will
be available by the end of March so that coding could begin in April. Obviously
the testing timelines will be determined by those participating in the pilots.
And then the guidance documentation will be adopted by other organizations.

If we look at the NCPDP timeline, the DERF (Data Element Request Form), which is the submission request
that we go through, hopefully will be approved at the workgroup meetings in
May, the ballot will go out in June, adjudication of the ballot will occur at
the August workgroup meetings. If there are comments received the ballot will
be recirculated in October, final comments at the November workgroup meeting,
and there is a one month required appeals period and then board approval by
January ’07, so that’s taking it through the full ANSI accreditation process
that we would go through with NCPDP. I would expect parallel activities to be happening
through HL7 and ASTM/CCR as well.

The other discussion point that the task group has had is that the core
content, or work product if you will because we haven’t quite decided what we
want to call it, will need to be maintained in its generic form. So if there are
needs for additional maintenance, requests for enhancement, it needs to go
through that process first with the original work, and then the modifications
would need to be submitted as a request to change SCRIPT, to change HL7, and to
change the ASTM CCR standard. So we are keeping the integrity of that original
work.

Code sets have been as we all know quite a discussion point and a challenge
for us as we’ve tried to figure out how to do this. So where are we going, how
are we going to get there, is anybody going to know when we get there, and so
what I wanted to try and frame up here today was really some of the issues and
topics that we’ve talked about within the task group and focus it within the
context of what we needed to accomplish in order for a SIG standard to be
available for use within the industry, really help you understand the path that
we followed to get to that. So what were the goals, what are the reasons for
use, who are the suppliers, who are the customers, and then ultimately what it
meant, selfishly for us, for the standard SIG development.

If we look at current state there’s multiple code systems and vocabularies
out there. There’s differing users and purposes, there’s differing audiences,
and there is a need for a strategic nationwide organization and direction on
the entire topic of code sets. And I wanted to emphasize nationwide because I
had a discussion with Scott Robertson, who’s been our HL7 representative on my
task group, and he said obviously HL7’s perspective is that it would also need
to be international, but the first step, and what’s within the realm of possibility,
is to say okay what can we do nationwide and then continue the collaborative
work with the other international organizations to make sure that we don’t have
two trains colliding or anything.

Within the goals, obviously the consistent use of the terminology, does it
mean the same thing to everybody, is it interoperable, can everybody work with
it in their various settings. It needs to be something that can be effectively
and efficiently implemented and given the number of players and the number of
customers that are out there that’s going to be a challenge. And Lynne’s
probably chuckling because I put my little patient safety soapbox back on
there, at the end of the day this to me is why we’re trying to do all of this.

In terms of why use a code set, obviously it’s prescribing and ordering
drugs and supplies, it’s compliance with regulatory and contractual
requirements, whether that be the SPL for purposes of the FDA, whether it’s how
you’re submitting something to a payer for reimbursement. With all the
discussion obviously with electronic health records we need it there, there’s a
tremendous amount of research that occurs and so much more could probably be
done efficiently if there wasn’t this constant mapping back and forth and what
does that code mean and who’s using what, and basic business operations which
covers the entire realm of possibilities.

When we looked at the suppliers that are available today, we looked at the
structured product label, we looked at RxNorm, we looked at SNOMED clinical
terminology, and we looked at HL7. I simply went out to websites and pulled
this information. So the SPL is defined as a document markup standard approved
by HL7 and adopted by the FDA as a mechanism for exchanging medication
information.

RxNorm provides standard names for clinical drugs, the ingredient, strength
and dose form, and for dose forms as administered to a patient. It provides
links from clinical drugs, both branded and generic, to their active
ingredients, drug components, and related brand names.

SNOMED International is advancing excellence in patient care through the
delivery of a dynamic and sustainable scientifically validated terminology and
infrastructure that enables clinicians, researchers, and patients to share
their knowledge worldwide across clinical specialties and sites of care.

And then the HL7 Vocabulary Technical Committee is trying to provide
organization and a repository for maintaining a coded vocabulary that, when
used with HL7 and other standards, allows the exchange of clinical data
and information so that sending and receiving systems have a shared,
well-defined and unambiguous knowledge of the meaning of the data transferred.

So you look at all four of those and you say okay well, who needs it?
Government needs it, providers need it, whether it’s a hospital setting, a clinic
setting, a pharmacy, the various vendors need it, whether that’s the drug
database vendors, systems, EMR system vendors, academic medical centers and
researchers, the manufacturers, and then ultimately the standards organizations
need to know what codes are out there.

If you look at what we’re trying to code within the SIG standard it’s dose
form, route, site, frequency, interval, vehicle, indication, and administration
timing. And again with the perspective of what is the prescriber needing to
communicate to the patient on how they will take their medication.

So what I’ve pulled together, because I needed something visual, was really
just a quick grid of who has codes for what we’re trying to accomplish. SNOMED
on dose form really is the base across HL7, RxNorm, and SPL, so we know that
that ties out across all four. Within route SNOMED and HL7 are pretty close,
HL7 has phase changes so that if something is in a powder form and it’s turned
into a suspension and it converts to a liquid they can accommodate that. As a
prescribing physician that’s not really something that’s part of the mindset,
they just know they want the amoxicillin suspension. But RxNorm and the SPL are
in sync for route.

And then going through the rest of them SNOMED and HL7 because of their
close collaboration, they both have site, frequency, interval, indication, and
vehicle. Strength is not an element that’s necessary within the concept of SIG
but it is something that’s critical in other pieces of e-prescribing and within
RxNorm and SPL but again from the SIG perspective strength is not key because
it’s not the dosing instructions for the patient. And then administration
timing, available within SNOMED and HL7, not available within RxNorm and the
SPL.

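The grid just described can be tabulated roughly as follows. This is a condensation of the spoken description above, so treat it as illustrative, not as an authoritative crosswalk of the four code systems.

```python
# Rough tabulation of the coverage grid described in the testimony: which
# code systems have codes for each SIG element. Condensed from the spoken
# description; illustrative only, not an authoritative crosswalk.
coverage = {
    "dose form":    {"SNOMED", "HL7", "RxNorm", "SPL"},  # SNOMED is the base
    "route":        {"SNOMED", "HL7", "RxNorm", "SPL"},  # SNOMED/HL7 close; RxNorm/SPL in sync
    "site":         {"SNOMED", "HL7"},
    "frequency":    {"SNOMED", "HL7"},
    "interval":     {"SNOMED", "HL7"},
    "indication":   {"SNOMED", "HL7"},
    "vehicle":      {"SNOMED", "HL7"},
    "admin timing": {"SNOMED", "HL7"},
    "strength":     {"RxNorm", "SPL"},  # critical elsewhere, but not a SIG element
}

# The elements that tie out across all four code systems:
common = [element for element, systems in coverage.items() if len(systems) == 4]
print(common)  # ['dose form', 'route']
```

Reading the grid this way makes the task group’s conclusion visible: SNOMED (with HL7 close behind) covers every element the SIG actually needs.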
So as I said SNOMED and HL7 are generally in sync, the SNOMED codes are
hierarchical which allows for a great deal of flexibility. I mentioned the
phase changes. RxNorm and SPL route codes include more detail than SNOMED,
often due to research needs. One example: SNOMED will say IV, that is the
route; RxNorm and SPL might take it further to IV piggyback, which in terms
of SIG becomes an administration timing, not a route, so we looked at that. And
then RxNorm, which is meant as a standard for clinical drugs and their dose
forms, may not always represent what exactly is ordered, prescribed and
dispensed. So clindamycin 150 milligrams comes in a six ml and a ten ml
vial; RxNorm is going to reflect clindamycin 150 milligrams per milliliter,
not the vial size. The vial size becomes key from a business operations
perspective, so we’re just trying to look at all of those components.

So then I said okay, well, as we went round and round over the 20-odd
calls that we’ve had and the meetings, why SNOMED, how did we end up here? We
looked at the timeline and the desire to have a SIG standard available for the
pilots in 2006, we looked at content. SNOMED has almost everything that we
believe we need and then some and part of what we will accomplish in our
meeting here on the 19th is to look at the SNOMED set, the codes, in
detail, determine out of the 500 codes that they might have for site which
are the 50 that we really need, and then identify anything that we need that
they don’t have and they’ve agreed to build that for us. And they’ve agreed to
maintain these subsets.

So from an implementation perspective yes it’s SNOMED but here’s going to be
the pieces of it that you need, you don’t need all of the codes. Availability,
people realize it’s available within the United States at no cost, it is a CHI
approved standard.

So that was where the task group came down.

DR. STEINDEL: Point of clarification, it’s not a CHI approved standard for

MS. TOPOR: Okay, thank you.

When we get to interoperability and this is really where it again begs the
question from the perspective of the task group, the work we were charged to do
was come up with a way to codify or standardize the dosing instructions within
an electronic prescription. We feel that we have done that work and that we can
accomplish that using SNOMED at the very least for the pilots. I think the
discussion that still needs to occur is then okay, back to the broader context,
what should this be? Is UMLS going to be housing the FDA terminology? Is NLM
the logical resource to own an interoperable set of route and dose form concepts of
what the FDA has, what HL7 has, what RxNorm has, what SNOMED has? And then what
can SNOMED do to assist with mapping outside of that?

So again, I really wanted to put this out there and say we’re trying to
build —

— [Laughter.] —

— and then what I wanted to do was just take a minute and really
acknowledge the members of the task group. You’ll see their names listed, those
in bold are the ones who’ve been primarily active participants, doing a great
deal of work, but to Tony’s point earlier, we’ve got everybody in here, we have
systems vendors, we have the pharmacies, we have the physicians, we’ve got the
knowledge and the drug database vendors, we’ve got the standards organizations,
we’ve got the government, they’ve all been there, been a huge part of this,
done a tremendous amount of work and so I just wanted to make sure that they
got their public recognition for everything that they’ve done.

MR. REYNOLDS: There was a suggestion that that be the subcommittee’s logo.

MS. TOPOR: For a small fee I’m sure I can arrange that.

Agenda Item: E-Prescribing Update – Industry
Version Recommendation – Ms. Gilbertson

MS. GILBERTSON: All right, if Laura hasn’t given you enough to think about,
here’s one more topic related to e-prescribing. This is a request from NCPDP; it
was discussed during the November workgroup meetings after the final rule had
come out on electronic prescribing and is a request brought to the committee.

The first is the letter which the workgroup put together, they tried to
stick to a couple paragraphs and that was it and then —

MR. REYNOLDS: Lynne, do we have something in front of us?

MS. GILBERTSON: You should have —

MR. REYNOLDS: What does it look like?

MS. GILBERTSON: Yes, that’s correct, you should have a letter from NCPDP on
NCPDP letterhead and then you should have an eight page back-up document as
well.

MR. REYNOLDS: I don’t think we have that.

MR. BLAIR: Wasn’t it sent on Monday?

MR. REYNOLDS: We have the attachment but not the letter.

MS. GILBERTSON: It appears to be coming, I’ll hold on a second until you get
copies. And the second document is an eight pager that starts with NCPDP SCRIPT
Standard Implementation Guide version 8.0 approved July 2005.

All right, this is from the letter which was submitted to the committee.
Dear NCVHS and Maria, the 42 CFR Part 423 Medicare Program; E-Prescribing and
the Prescription Drug Program, Final Rule, names NCPDP SCRIPT Standard Version
5.0 as the foundation standard version of SCRIPT. The industry is already
moving to SCRIPT Version 8.1. SCRIPT Version 8.1 has been fully balloted and
supports all the functionality of 5.0 while supporting additional
functionality, including medication history.

It is the consensus of the NCPDP membership that CMS must name SCRIPT
Version 8.1 for e-prescribing under MMA instead of Version 5.0 and specify
functions that are required in the foundation standard, for example excluding
medication history until it has been fully tested as the rule states. By
adopting the NCPDP recommendation trading partners can move forward with higher
versions so long as they still support the foundation standards and are able to
accept any message based on the foundation standard from a trading partner that
has not moved beyond the foundation standard. In addition the NCPDP
recommendation will allow the industry to make progress in testing new
functionality while still supporting minimum standards for e-prescribing.

NCPDP stands ready to assist CMS in bringing together industry stakeholders
involved in the current prescription processing for the smooth inclusion of the
Medicare Prescription Drug Benefit Program. Sincerely, Lee Ann Stember on
behalf of NCPDP.

MR. REYNOLDS: A clarification again, it seems to be saying in paragraph two
that it’s pretty much the same except for medication history, five and eight.

MS. GILBERTSON: Correct, that is correct.

MR. REYNOLDS: But then why did you restate, why did you state in A and B? In
other words you’re saying, I guess I read in A it still supports the foundation
standard functions, you’ve already said that it included, what’s the point of
highlighting A and B?

MS. GILBERTSON: Because the rule was clear about what functions within
SCRIPT were supported and so we wanted to make sure to call out that that was
still the intent —

MR. REYNOLDS: So you’re saying if somebody uses 8.1 they still have to do
all the foundation, that’s what you’re saying.

MS. GILBERTSON: Right, like what was named in 5.0, because medication
history is in Version 8 and above we wanted to make sure it was clear that we
weren’t saying name medication history as a foundation standard since the rule
was clear, the viewpoint —

MR. REYNOLDS: So foundation standard is a bit of a term of art in that.

MS. GILBERTSON: Let’s say we used the terminology we were given, yes.

And then based on this information we were asked to provide further
documentation about the movement from SCRIPT 5.0 to Version 8, and
Version 8.1 specifically, and provide some information about the comparison,
the benefits, things like that, as more information rather than just a formal
request to say name it without giving you any back-up. So the eight page
document that accompanies it fulfills that request that was asked of us, and we
did it in a couple of different ways so that hopefully it will be clear. I
apologize for the verbosity, but we wanted to make sure we provided as much
information as possible to help understanding, and in one particular place we
included the chart that was requested.

So just to give you some background: in NCPDP SCRIPT Standard Implementation
Guide Version 8.0, which was approved in July of 2005, we added charts to help
the implementers, and there’s an example on page four. From the Imp Guide,
item 1.A gives you information about the section: the specific segment discussion
was modified to provide implementation references to the supported transactions.
For each segment that’s used within the standard, columns have been added for
the transactions that use the segment. Values of mandatory, conditional
mandatory, conditional, and not used occur according to the standard format. We
also made a clarification, the PVD segment, which is a standard provider
segment, was used for prescriber, supervisor and pharmacy, and we split those
out just to make it clearer for the implementer.

The charts were built into the document in Version 8.0 but the columns only
reflected the standard format. This was to give the task group and then the
workgroup time to incorporate the guidance into the chart format. And also in
SCRIPT Version 8.0 medication history transactions were added. Medication
history used these charts since these were new transactions being brought
forward.

A little explanation about the term standard format that I’m going to go
through in one of the charts. SCRIPT is based on UN/EDIFACT syntax, so the
segments and fields are defined in a standard fashion. That is what the
standard format column on the charts reflects, but for specific transactions or
implementations there are business rules that must be applied, and so the other
columns for the other transactions were added to the chart. It’s kind of the
baseline, and then here’s the extension for the business case.

Now onto SCRIPT Version 8.1 which was approved in October 2005. Version 8.1
provides more clarification on the use of the fill status notification
transactions. This information in item one is out of the implementation guide.
Clarifications include the use of the term dispensed, not dispensed and
partially dispensed instead of the term filled, not filled and partially filled
that was used in previous versions. The transaction did not change, just the
wording of the intent, since these transactions were always intended to be at
the dispensing action and that’s how they were being used, not at the filling
action, but the original wording had been retained. And this recommendation
was reported to NCVHS as having been done.

And based on review by the task group, as I mentioned earlier, and then by the
workgroup, the specific segment discussion was modified to cite, for each
transaction applicable to that segment, the specific field usage of mandatory,
conditional mandatory, conditional or not used. And where the specific field
usage is different than the standard format, the item was cited in bold italics
just to provide more clarification for the reader.

Now as you look at the bottom of page one under Specifics of Item #2, the
Specific Segment Discussion Changes, this is just to show you an example of
what we were talking about, the clarification that was made. So if you look
just for example in the UIB segment which was used to carry the payload,
identify the sender, identify the receiver, down the third row, the level one
identification code qualifier, the standard format is conditional in the
EDIFACT syntax. The clarification, which has always been in SCRIPT, is that this
field is mandatory, so that’s the difference between what you’ll see as the
standard format column and then what becomes the individual transaction
columns. And that continues on page two.

And if you move to page three I tried to show you an example of what was
modified in SCRIPT 8.0, this is the drug segment, just a small excerpt from it,
and it just shows you the field number column, the field name and the remarks
have always been there. The standard format once again as I mentioned that just
shows the reader what the original standard guidance was, and if you look at
all the different transactions that use this particular DRU segment, all it has
is copied data, a C occurs all the way across, and that’s the conditional versus
conditional mandatory, things like that; unless we had specific guidance these
are just exact copies. And this was, as I mentioned earlier, just to set up the
guides so that the next version could report the specific rules once the task
group and workgroup had approved them. We just needed some time to get that all
in place.

Then if you turn to page four the DRU segment, now this is the excerpt
that’s in 8.1 so the task group and the workgroup had completed their work, put
it through the ballot process and got it approved. Just a brief example, the
020 level, the quantity composite, that first row there, the standard format
says it’s conditional but the usage in the industry has always been this field
is mandatory for the transactions that are shown with an M. So all this was, was
to actually put this in a chart format to allow the reader more guidance;
rather than reading verbiage somewhere or a different kind of chart, this was
to show the transactions against the segment and their usage. So that’s that.

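The segment and field usage charts described above amount to a lookup: each field carries a standard format usage plus per-transaction overrides. The sketch below is illustrative, modeled on the DRU quantity composite example in the testimony; the transaction names and the exact overrides are assumptions, not copied from the implementation guide.

```python
# Sketch of the SCRIPT segment/field usage charts: a "standard format" value
# from the UN/EDIFACT syntax, plus per-transaction overrides. Usage codes:
# "M" = mandatory, "C" = conditional, "CM" = conditional mandatory, "N" = not used.
# Transaction names and overrides here are illustrative assumptions.
usage_chart = {
    ("DRU", "quantity_composite"): {
        "standard_format": "C",   # conditional under the plain EDIFACT syntax
        "NEWRX": "M",             # but industry usage makes it mandatory here
        "REFREQ": "M",
    },
}

def field_usage(segment, field, transaction):
    """Return the usage code for a field in a given transaction, falling
    back to the standard format when no override is charted."""
    chart = usage_chart[(segment, field)]
    return chart.get(transaction, chart["standard_format"])

print(field_usage("DRU", "quantity_composite", "NEWRX"))  # M
print(field_usage("DRU", "quantity_composite", "OTHER"))  # C (falls back)
```

This is the "implementer codes once" point: the chart resolves per-transaction rules that each site would otherwise have to troubleshoot independently.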
Then we provided, this was the chart that was requested showing the
comparison or benefits of SCRIPT Version 8 to previous versions. The first
feature is the charts of the segment and field usage which is what we’ve been
discussing, they were added in 8.0 and enhanced in 8.1. The segment and field
usage enhancement is vital to provide consistent and accurate utilization of
common data elements across the ever-growing SCRIPT Standard. The final
enhancements to this guidance, available only in 8.1, provide in a can’t-miss
chart format the correct usage of each field in normal implementation. To
implement in versions earlier than 8.1 without this guidance would increase the
likelihood of incompatibilities and require each user to troubleshoot issues
that have already been documented and resolved in this section.

SCRIPT 8.1 provides the most guidance to date to assist implementers with
field and segment usage per transaction type. The implementer codes once and
receives the benefit of these rules.

Guidance and clarification on the fill status notification, once again was
added in 8.1. The guidance and clarification provided only in Version 8.1 is
specifically to ensure users of the fill status notification transactions and
upcoming pilots do so in an accurate and consistent manner. Without this
guidance each pilot and future production users may implement these new
transactions in incompatible ways and make the data collected from the pilots
incompatible and of far less value. Once again, the implementer codes once.

The medication history transactions added in 8.0 and also supported in 8.1
obviously, 8.1 includes the medication history transactions which are vital to
upcoming pilots. These transactions were first introduced in 8.0 and by
implementing in 8.1 each user can take advantage of the medication history
transactions while gaining the other important benefits of 8.1, and obviously
the implementer codes once.

Other information. The industry, pharmacy chains, software vendors, pharmacy
network providers, confirmed at the November NCPDP meeting that they are
unanimously planning to code and implement 8.1. Their decision was based on
consensus that 8.1 provided the needed enhancements and the pilot guidance for
2006 and beyond. No industry representative stated a plan to implement 5.0 as
there was no business justification, no major enhancements, or pilot guidance
in that version.

For electronic prescribing to continue to advance it is critical that the
industry be allowed to take advantage of enhancements such as 8.1 and not be
artificially held back to an outdated level such as 5.0. Since 8.1 has all the
features of 5.0 and more, by implementing 8.1 the industry will be meeting and
exceeding any requirements. To require the industry to code to 5.0 and again to
8.1 would be an unsupportable financial burden that would delay the expanded
use of electronic prescribing.

Since the industry is moving from 4.2 to a new version it is recommended
that all move to the one they have analyzed as the level with the needed
business justification, technical improvements, and pilot guidance. 8.1
includes all the benefits and capabilities of 5.0 and more.

If an implementer is newly coding e-prescribing 8.1 is a more efficient
version to implement due to its improved guidance features and incorporation of
all earlier versions like 5.0. By using the guidance to eliminate potential
mistakes the user can more efficiently and accurately implement SCRIPT.

If an implementer is already using SCRIPT versions the move to 8.1 makes the
most sense because it offers the best guidance, technical features, and pilot
requirements. Coding to 8.0 does not provide the best level of guidance as
important additions were made in 8.1.

There are no known industry organizations currently implementing 5.0
functionality only. There is industry consensus that no organization should
adopt the named standard version 5.0 as a starting point because the standards
and the industry have rapidly progressed and they would not reap the benefits
of the 8.1 guidance and would experience confusion or questions that could be
answered in 8.1.

8.1 implementations include all the benefits of 5.0 plus the important
additional features and benefits listed.

Thank you.

MR. REYNOLDS: Okay, thanks to all of you, excellent job. I’m going to start
the questioning and I’m going to start on the last section. Does 8.1 give
anybody, let me start off first, you used the word all liberally throughout the
discussion, does all mean all? Or does it mean all the people that were on the
committee or all the people that are part of NCPDP, or could you, do you feel
comfortable when you say you’re talking about the e-prescribing industry in its
entirety agrees with what you said? You said all, you didn’t say most, you
didn’t say almost everybody —

MS. GILBERTSON: There were probably 150 in the room that day who all unanimously said
we’re not coding to 5.0, we’ve moved because we don’t have the benefits, so
that’s the basis of all.

MR. REYNOLDS: Okay, but you can see why the question because all is a very
inclusive word as you look at something.

MS. GILBERTSON: I guess they really had a point here.

MR. REYNOLDS: And the next question and I guess it would appear to me this
is, we as a committee recommended this backward compatibility and I think
you’ve actually now called our bluff, so you kind of brought it in. Simon, go
ahead and play off of that —

DR. COHN: I was just going to ask, I mean can you clarify, you used a lot of
words that almost sound like backward compatibility but I just want to clarify,
does this still support the foundation standard functions? Being backward
compatible for those functions, or does it mean almost backward compatible?

MS. GILBERTSON: It is backward compatible, my hesitation is only because
it’s very difficult to really explain what that term means. Only because if for
example you introduce a qualifier in a future version that makes it not
backward compatible because that qualifier wasn’t known in an earlier version,
not really, if you don’t use that qualifier you’re still backward compatible.
So that’s the hesitation in the term.

MR. REYNOLDS: And this appears also to be forward compatible.

MS. GILBERTSON: Which is another conundrum, yes, exactly, that’s the
hesitation. But the functionality which is in SCRIPT 8.1, other than medication
history which was added as we’ve established, the functionality is the same, we
haven’t added, modified things like that, we have only clarified. So I guess
the answer to that question is backward compatible yes.
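
Ms. Gilbertson’s qualifier point can be sketched concretely: a newer-version message stays processable by an older receiver so long as it uses no qualifiers introduced after the receiver’s version. The version numbers and qualifier names below are illustrative assumptions, not the actual SCRIPT qualifier set.

```python
# Sketch of the backward-compatibility point made above: a new qualifier only
# breaks compatibility with an older receiver if the message actually uses it.
# Version numbers and qualifier names are illustrative assumptions.
QUALIFIER_INTRODUCED_IN = {
    "filled": 5.0,       # foundation-era wording
    "dispensed": 8.1,    # e.g. the dispensed/not-dispensed wording clarified in 8.1
}

def receiver_can_process(message_qualifiers, receiver_version):
    """True if every qualifier in the message was already defined at the
    receiver's version, i.e. the message is effectively backward compatible."""
    return all(QUALIFIER_INTRODUCED_IN[q] <= receiver_version
               for q in message_qualifiers)

# An 8.1 sender that sticks to foundation-era qualifiers stays compatible:
print(receiver_can_process({"filled"}, 5.0))      # True
# Using an 8.1-only qualifier breaks it for a 5.0 receiver:
print(receiver_can_process({"dispensed"}, 5.0))   # False
```

This is why "backward compatible" resists a one-word answer: it depends on what the sender actually puts in the message, not just on the version numbers.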

MR. REYNOLDS: Because my question was would this, and again since we’re
working with CMS on a national situation, so if anybody had coded the 5.0,
let’s say there was 152 that should have voted instead of 150. Are there any
two that could be a 5.0 right now that would be put at a disadvantage in trying
to become part of the test for CMS? If they were at 5.0, not at 8.1?

MR. BLAIR: Well, I think that they’re going to adopt whatever version is
appropriate for the test so I don’t think that that would be a factor whether
or not you were chosen.

MR. REYNOLDS: I understand that but I need an answer to my question and then
I’ll agree or disagree.

MS. GILBERTSON: Obviously I cannot speak for every single entity out there

MR. REYNOLDS: No, but I’m saying if somebody is at 5.0 —

MS. GILBERTSON: The entities that were all represented in this large
gathering represented a lot of the trading partners who are working together to
move forward under the MMA and they have said their recommendations to their
trading partners are to code to 8.1. So if we’ve missed anybody it would be
someone who is maybe not working with any trading partners which would be kind
of difficult —

MR. REYNOLDS: That’s fine. Then my last question on this and then I’ll turn
it over to everybody else to ask is, and maybe Karen you can help with this, we
as a committee, unless Simon tells me differently, if you sent this letter to
us, didn’t go to CMS, it went to us —

MS. GILBERTSON: Well it was copied to Maria.

MR. REYNOLDS: Right. As staff to this committee.

MS. GILBERTSON: Right, this was a formal request of NCVHS.

MR. REYNOLDS: So it’s coming here, we have our next full committee in
February. The pilots are going to be awarded in December and people are going
to be moving forward. So I guess, I’d like to make sure we have some discussion
on does, since basically this is answering what we requested in many ways, I
mean what we put in our letter to the HHS and recommended we hoped happened and
hoped that the industry would do it and so on, does our process impede this
recommendation versus us being able to say thanks, you did what we asked in the
letter, and it goes then to CMS —

MS. GILBERTSON: There’s a couple of phases to that —

DR. COHN: Lynne, I'll let you respond and then I have a more specific question —

MS. GILBERTSON: A couple of phases to that, from the recommendations that
were given on the open door I guess we called for the RFA, for the
e-prescribing, you have to fulfill the different requirements that are in that
RFA. One of them is medication history; if you're going to do medication history
you're coding to 8.1, or 8.0 at the very least, but people have said we're
coding to 8.1, we’re using the charts. So to do the pilots you’re already
moving there and because the consortiums have formed they’re supporting 8.1, so
that’s part of it. This is a formal request through the e-prescribing final
rule to request what unfortunately is a version modification process,
regulatory, because the industry will do, the industry wants to move, I mean we
don’t have enforcement, we don’t have things like that related to e-prescribing
but I mean it’s a formal request that instead of 5.0 it get changed to 8.1.

DR. COHN: Because Harry was obviously focusing on the pilot piece, I’m
obviously less concerned about the pilot piece, I’m obviously more concerned
about the foundation standard pieces that go into full implementation on
January 1st. And obviously what I’m looking at is you’re coming
forward with a request on 25 days before a statutory enforcement date for a set
of activities that are not pilots but that are actually things that should be
happening in the industry in 25 days. And you’re telling me everybody is
prepared for this change. Is that correct?

MS. GILBERTSON: That is correct, that is what the guidance was given to me
to report to you, yes.

MR. BLAIR: I think it’s a little broader than that actually, and I don’t
know, maybe Karen can clarify this, but it sounds like we have a number of
issues and they’re interrelated. One is that the MMA did not specify 8.1 but it
did specify medication history which of course is in 8.1, so there’s sort of a
contradiction there. The other pieces are that if I recall correctly and maybe
somebody could get the RFA out, I think the RFA referred to the fact that
they’d be using the foundation standards, so apparently there would be a
contradiction in the RFA —

MS. GILBERTSON: There’s a clarification, Jeff. I think the industry was
expecting to see medication history transactions as foundation standards, hence
their reasoning for moving to the version 8s; medication history transactions
in the final rule were set outside the foundation and put into the pilot.

MR. BLAIR: As an initial standard, and maybe that’s the distinction, is that
we thought the medication history would be an initial standard rather than a
foundation standard but in fact they’ve been rolled into one. It’s been added
to a foundation standard but given a different version. Is that a correct
statement? A foundation standard being NCPDP SCRIPT.

MS. GILBERTSON: If the foundation standard is just NCPDP SCRIPT Standard
Implementation Guide that is correct. The problem is if the foundation standard
is NCPDP SCRIPT Standard Implementation Guide Version 5.0.

MR. BLAIR: Right, so in a sense the industry, the intent, everybody seems to
agree on the intent, it almost seems as if it’s a semantic issue because okay,
so in this case the medication history didn’t come out as a separate
independent initial standard, it was put into NCPDP SCRIPT as a new version.
Well, okay, so there’s a difference there. I guess that if maybe Karen can
address whether that can be accommodated and it affects both the ruling as a
final rule and it affects what will be used in the e-prescribing pilot tests.
But we have the same type of situation with, it looks as if the dosage forms
for the prior authorization, that is going to be ready but not the other
functions of prior authorization and if my understanding is correct, Laura,
from your testimony, that codified and specialized SIGs will really not be
ready to be included in the pilot tests. Did I state this correctly? Any
corrections on that?

MS. TOPOR: From the SIG perspective what we will have available no later
than March 31st is the guidance documentation for the use of the
standard SIG within SCRIPT. From a version perspective the SIG Task Group
hasn’t sort of dealt with what version of SCRIPT that will end up being because
by default when it’s ready and added to SCRIPT through the formal process of
voting and balloting and adjudication then it will become SCRIPT X.Y or
whatever numbers we’re up to then. There’s arguably in my mind no reason that
people, piloters will not be able to take the guidance documentation that will
be in the SCRIPT vernacular and use SIG with SCRIPT, whichever version,
presumably 8.1 because the reality seems to be that everybody is going to use
8.1, that’s what they’re moving forward with, and then they’ll just attach the
SIG piece to that —

MR. BLAIR: See I guess my question really, you’ve all provided information,
you’ve done as much as you can, you’ve done fantastic work, now the question is
I guess really has to be addressed to both CMS and AHRQ as to whether they
could accommodate this both within the e-prescribing pilot test and in terms of
the final rule and if Maria isn’t here I don’t know who to address the
questions to.

MR. REYNOLDS: Let’s take them one at a time. First is the 8.1 and then we’ll
talk about the SIG and the prior auth because the 8.1 is your basis, Karen I’d
love for you to comment on before we start blending all these subjects I’d love
to hear your comment on what you’re thinking.

DR. STEINDEL: Karen, can I ask you also to address, because I think you’re
probably going to get to this, is why did we see this letter today 25 days
before it’s going into play? And I think there’s a reason in the final reg.

MS. TRUDEL: Well then maybe you can answer that question. I don’t know —

DR. STEINDEL: I don’t have the reg —

MS. TRUDEL: I mean we’re receiving it today because today is the meeting.

DR. STEINDEL: No, I was wondering what juxtaposition with respect to the
final, because it’s going, 5.0 goes into play as Simon pointed out January 1
and —

MR. REYNOLDS: Let’s let Karen answer, what she thinks, she’s heard it, let’s
hear what she has to say.

MS. TRUDEL: Some of this is a sort of semantic issue because there’s, I think,
a mindset among people who’ve worked with HIPAA for a long time that you
have a version and then that version goes away and you completely substitute
another version and that version is the only version anybody can have until
that version goes away. However things are in the world external to NCPDP,
the way we’ve learned that it works here is that essentially you can be
compliant with 5.0 even if you’ve implemented 8.1 in just the same way as if
you have Word version X you can produce documents that are Word version X-2.

So I think some of this, we’re still trying to work some of this out legally
with the legal staff but I’m at least pretty convinced that semantically we
can’t require somebody in the pilots to implement version 5.0 and implement the
medication history standard, you can’t do both of them and it’s silly to ask
someone to implement 5.0 and then pull it out and implement 8.1.

So essentially because we didn’t realize that the medication history
standard for instance was going to be included in the SCRIPT and we were kind
of thinking that it was going to travel along separately we created a situation
in the pilots where if we require people to adhere to the foundation standard
as we specified it they wouldn’t be able to accommodate some very important
functionality that we wanted to test.

So I really think that that’s kind of, I think the pilot issue at this point
is moot, I think we’ve all agreed that there’s no way that pilot applicants can
be compliant with piloting the medication history standard without implementing
a higher version of the SCRIPT. Does that make sense to people?

MS. GILBERTSON: And we have said all along that if the piloters obviously
uncover something they need and it’s not in a balloted approved published
standard we’re going to put that into a draft version so the piloters can
all implement the same way. It might turn out to be 9, 10, whatever the next
version might be but that’s when it’s actually approved and balloted, that has
nothing to do with the needs of the pilots.

And on the other side, I did want to mention that there were a lot of
comments that came in on the NPRM about don’t constrain versions and when this
discussion came up in the different, in the August and the November NCPDP
meetings the industry was adamant, we don’t want to be constricted in a rule by
a version, let the industry continue to move forward, do not let the naming of
a standard version constrain us, and do not make us go through a long process
to get a new version named.

MS. TRUDEL: And then let me follow-up on that because here’s the flip side
of that issue. From a legal and regulatory perspective because these
implementation guides are incorporated by reference in the Federal Register we
couldn’t simply say version 4.2 and higher, because it’s very difficult to
incorporate something by reference, which means that presumably you can put
your hands on it, when it doesn’t yet exist. So we came up with the notion of backwards
compatibility and a more flexible process and a shorter process to address the
needs of the industry to move forward and to also manage the constraints of the
Administrative Procedures Act and the regulatory issues that we were up
against. So here we are with a poster child of okay, this is what we said,
let’s see how it works.

DR. STEINDEL: Yeah, Karen, that’s what I was referring to when I said I
think you’re going to answer my concern, which was that portion of the
regulation that allowed this flexibility to move on to a backward compatible
standard, which is what Simon was trying to ascertain, is 8.1 backward
compatible to 5.0, and the answer was yes. And I believe that the reg, and I have
to admit that I’m probably one of the few people in the room that has not
memorized the reg but I believe the requirement was that they come to NCVHS for
some guidance as to whether or not this should be done and I was looking at
this letter as the request for that guidance to initiate the process, the
abbreviated process, that you were referring to.

MS. TRUDEL: I must blush to say that I’m one of the other people who has not
memorized the final rule, I’m not sure that there is a requirement for an NCVHS
review, I’m sure that any guidance that the committee would have to make would
certainly be considered by the Secretary, I’m not sure that it’s required in
the way that the DSMO process adds in that step. But there is a requirement
that the Secretary assess the request, assure that there is not added
functionality that could be a burden, assess that it really is backwards
compatible, and then the official process is to proceed with a Federal Register —

MR. REYNOLDS: What is the necessary timeframe? That was back to my question,
so fine, we have a letter, and you just said we may not necessarily be an
important part of that step.

MS. TRUDEL: We’ll have to check on that offline.

MR. REYNOLDS: Okay, so Jeff and then Simon.

MR. BLAIR: My question boils down to this, and Karen, help us through, okay,
because at this stage the industry has given us as much as they possibly can
before the January 1st deadline, in some cases they’ve given it to
us with a slightly different label but the functionality is there and the
implementation guides are there. And I think that both the industry, NCVHS and
CMS and AHRQ are all really delighted with the progress. So now the question is
two things, if you have flexibility to go forward with having 8.1 used in the
pilot tests, great, so if you needed something from NCVHS please let us know,
maybe we could quickly do some, a letter or something if you need it from us.
Number two is with respect to the fact that on the prior authorizations it’s
going to be the dosing. On that one is there anything that you would need from
NCVHS? And then the third one is the one I’m really worried about is the
codified, the structured and codified SIG because if the balloting isn’t going
to be done until later in 2006 can that be included in the pilot tests or have
we missed the window and what guidance can you give us on all of that and
specifically what can NCVHS do today and tomorrow to enable you to move —

MS. TRUDEL: I’m not sure that there’s anything that the committee needs to
do. I think with respect to the first question the version and the medication
history, for the pilots, yes, I’m speaking about the pilots, I think we’re very
clear that we’ve taken the only approach that allows people to do what we
wanted them to do —

MR. BLAIR: So you can accommodate that if we go forward.

MS. TRUDEL: Yeah. In terms of the second one that has to do with the prior
authorization, I’m sorry I missed that part of the presentation so I’ll need to
defer to Tony, or not defer but confer, let me move on to three with respect to
the structure of the codified SIG and my answer to that would be I don’t think
I want to wait until its actually been balloted to test it to see if it works,
I think it’s appropriate to test it before it gets balloted and if it doesn’t
work then let’s fix it before we go through that whole process.

MR. BLAIR: Well given that you’ve given that answer and it does appear as if
most of the folks that are going to be doing the pilot test even though January
1st is the beginning of that process timeframe they’re not going to
really be ready to actually get everybody in place, get the systems in place,
until March, April, May, when you wind up looking at the applications anyway.
So I would submit to you that even though the structured and codified SIG, some
of it may not be quite ready to include until March, April, or May, that that
still isn’t too late to be included in the pilot tests.

MR. SCHUETH: Can I make a clarification about prior auth? I just want to
make one clarification about prior auth, it’s not dosage, Jeff, it’s the PA
attachment, the HL7 PA attachment that is not currently included as part of the
RFA for the pilots, that was the piece that I was talking about.

MR. REYNOLDS: But I think there’s a whole lot of questions, that’s a whole
separate subject —

MR. SCHUETH: But for the record we needed to clarify that and if you want to
go back to that we can but I just needed to —

MR. REYNOLDS: That’s a whole separate subject. Simon?

DR. COHN: I realize that we are having a number of conversations occurring at
the same time around this letter and I guess I just want to try to focus it, I
mean Jeff obviously has been very concerned about the pilot issues and I think
actually obviously what I think Karen has been saying is that CMS has
considerable latitude in terms of conducting the pilots. I’m obviously
personally a lot more concerned about an event that’s going to happen in 25
days when the foundation standards go into regulation.

And Karen I guess, remember I’m not a lawyer either but I have read through
the whole thing, and the question gets to be is what one takes out of the
question and answer parts where it really does, I mean there’s a reason why
this came to NCVHS because we were asked to advise on versioning for MMA. Now
we have to go back and look very specifically at whether or not, what that
exactly means and how we interpret that, and I guess the other piece of course
is that there is a federal regulatory process that would have to occur very
rapidly to accommodate a 25 day period.

I guess I have to look at you in terms of, I mean this is asking the
government to be exceptionally responsive to meet a January 1st
deadline, I don’t know if you’ve thought about that one way or another or
whether you even see it as an issue. We’re obviously happy to help in any way
we can.

MS. TRUDEL: I would suggest that in terms of what the subcommittee chooses
to do is to assess the request on its merits and make a recommendation.

MR. BLAIR: Simon, could I also refer to that a little bit? One of the things
that I’m kind of concerned about is that it was I believe Congress that
originally in the HIPAA legislation set forth the guidance for when, that every
new version had to go through the NPRM process and I’m concerned about that
from the standpoint that I saw some legislation in Congress right now which is
talking about recognizing new versions and in other words Congress still has
that mindset and until Congress modifies and releases us of that constraint I
don’t know if there’s anything that we could do or HHS can do and I don’t know
how to go about the process of asking them to recognize backward compatibility.

MS. TRUDEL: The HIPAA legislation and the e-prescribing enabling legislation
have very different provisions associated with them and I think it’s also fair
to say that we’ve learned a lot from the e-prescribing process that we didn’t
know when we undertook HIPAA. And I think either Simon or Harry made this point
the last time we had a meeting that is the one process going to inform the
other and I think it’s really important to keep in mind that one of the reasons
that we will be talking about this update and versioning issue tomorrow is that
that’s exactly what we are trying to do. And so while we do have legislative
constraints, while we may have current regulatory constraints, what we’re
trying to do is to figure out the extent to which we can and should and the
mechanism by which we might actually try to bring some of these processes more
into convergence with each other.

MR. BLAIR: Especially since ASC X12N 270/271 is in both —

MS. TRUDEL: Correct.

MR. BLAIR: Both e-prescribing and HIPAA.

MR. REYNOLDS: And also at a time where because medication history was put
into that new version it really creates almost a whole new paradigm.

DR. STEINDEL: Harry, this is more to you and this is an agenda check item, I
think there’s some very serious discussions that are going to be taking place
on both the prior authorization and the codified SIG and I’m looking at the
amount of time we have left and I was wondering if we could move on.

MR. REYNOLDS: I’ve been very cognizant of the agenda and I will say this to
you, I think if we have to cut short some of our committee discussion we could
find the time but the point I want to make is we need to understand clearly as
a committee, right now, because there’s a whole process of executive
committees, there’s a whole process, anything we write, even if it is “thanks a
lot,” has to pretty well go back through our process. So we’ve already tried to
discuss what the government’s process is, now we’ve got our own process, so I’d
at least like to have some guidance because obviously if we are thinking about
rubber stamping, I haven’t thought this through, if we’re thinking about rubber
stamping what we just got to be sent to the Secretary pretty quickly, whether
Simon would take it through the executive committee or something else just to
say we think this is a good thing, then we need to, I’d like to have that done,
I’d like to figure out some way to work on that, have this committee look at it
tomorrow, and move it because the 25 days is still ringing in my ears.

And so the reason I want to resolve this is I can tell you the codified SIG
isn’t going to get fixed today and the discussions we had on the prior
approval, I mean I’ve got like real serious implication questions on that one,
so that’s not going to happen. So I want to make sure we close this item and
that we understand —

DR. STEINDEL: I’d like to address that point and the reason why I brought up
my point was at the AHIC meeting last week they talked about e-prescribing as
being a breakthrough use case and generally speaking what was heard at AHIC is
there’s e-prescribing being done today and there probably isn’t
going to be that much more done magically on January 1st. So I don’t
think just because the reg goes into effect on January 1st we’re
under a 25 day deadline to look at whether or not we should write a letter to
the Secretary on version 8.1 change —

MR. REYNOLDS: That’s a different question then whether we’re behind in the
agenda, now that you’ve clarified that I understand.

MR. REYNOLDS: Simon. You’ve kind of heard where we are and what we’re doing,
we’ll kick this up to the big chair.

DR. COHN: Well I guess I was just going to comment, I’m just sort of
thinking about the process here and obviously we’re, I mean talking about
whether and if we should do something in the way of a letter and whether or not
it needs to be an accelerated activity within NCVHS. The one thing that I’m
sort of noticing that I’m lacking here, and I trust Lynne Gilbertson implicitly
as representing NCPDP’s views, and I tend to think that she probably also
represents the industry views, but I’ve just noticed that we haven’t actually
heard anybody from the industry actually commenting on their support of all of
this and I’m just wondering if we might want to take a minute and maybe open
the floor recognizing we do have at least I think some representatives from
around the industry to sort of comment on their readiness to implement, just
because I think that would be a good, I mean Lynne has represented that
virtually everybody is in support of this and everybody recommends it, and not
that I in any way disagree, but it would make me feel a little better if
others were supportive of that.

MR. REYNOLDS: Well one way to simplify that process, would any of the
industry leaders that don’t 100 percent agree with Lynne —

— [Laughter.] —

MR. REYNOLDS: That do not 100 percent agree with Lynne and are not moving
forward on 8.1 please stand up. And the mic is open. The point is, I mean we
can do it either way, we can have them say it again but I think she represented
that 150 people in the industry —

MR. SCHUETH: Harry, I think it’s fair to say that everyone here would agree
with the 8.1, if there’s anyone that’s outside the all —

MR. REYNOLDS: You can’t say that for them, that’s what Simon was asking.
Everybody had a chance to stand up, come to the mic, if there’s anybody in this
room from the industry who does not agree with what Lynne said please step to
the microphone and move forward.

MR. DECARLOS: Michael DeCarlo with Blue Cross and Blue Shield Association.
Harry, Blue Cross does not have a representative on NCPDP so I can’t say
that as an association, although some of our plans may have been there and may
have agreed and may have even been on the task force, I can’t say that the
association itself agrees with moving, or having 8.1 stated to be the new
foundation standard if I understand what’s being proposed here.

I can say that one of our comments to CMS with regard to the adoption of the
medication history standard was that there would be serious implementation
problems connected with that and we asked the agency for an extension or
additional time to address those implementation problems so that we could solve
whatever problems would come up incorporating medication history or even NCPDP
SCRIPT into the processing systems of those plans which had either not used the
standards that were being recommended or had developed e-prescribing capability
in house. So we were advocating more time for the industry to become compliant.

In the final rule the agency did come up with a process that would allow it
to assess whether or not a new version was in fact backwards compatible and
indicates that they want to at that point put out a notice saying use of the
new standard would be voluntary but it would not undermine or undercut the use
of the prior standard as people had implemented that. I think that’s a fair and
valid process if in fact they can meet the criteria which they put out in the
final rule with regard to assessing whether the version was backward compatible.

So what am I saying? I’m saying we’re not on record as being in favor of
moving to 8.1, I’m not saying that we wouldn’t encourage or allow the
subcommittee to make its criteria determination on the validity of this
request. But I am saying we would have problems if 25 days before January
1st the agency were to come out and say you know what, 5.0 is not
where you ought to be, you actually ought to be at 8.1.

DR. STEINDEL: I have a couple of questions on that. Since the final rule
came out, I don’t know what, it was days ago it seems like, maybe a month or
two ago, and you were talking about people having implementation problems. Well
if they haven’t really started, if they have started implementing 5.0 since the
day the final rule came out and January 1st, or if they’ve already
implemented 5.0, we’ve heard that this new version is backward compatible so it
shouldn’t have an impact on them. If they haven’t started implementing any
e-prescribing version then your comments about the lack of time for
implementation apply to either version.

MR. DECARLOS: That’s a true statement.

DR. STEINDEL: So I don’t know how we should consider your comments.

MR. DECARLOS: The question was, was there anyone in the room that wasn’t 100
percent on board with the recommendation to move to 8.1, and that’s what I’m
responding to.

DR. STEINDEL: And you’ve explained your reason why you might not be, okay,
now I understand. Thank you.

MR. REYNOLDS: Karen and then Jeff and then Michael.

MS. TRUDEL: I really want to stress the fact that this whole notion of
backwards compatibility is totally voluntary.

MR. DECARLOS: I realize that, I realize that.

MR. BLAIR: It’s my understanding that unless you, unless you’re going to do
the medication history, I don’t see anything that compels you to move to 8.1,
8.1 the major change is medication history. At this point I think it’s
actually, it is sort of semantics, whether it’s 8.1 or 5.0 the difference is
medication history and in my mind to me the real issue is simply not impacting
the pilot tests because I don’t think that 8.1 or 5.0 really impacts the
industry, it’s just a matter of whether or not they’re going to do medication
history now or not.

MR. REYNOLDS: Karen, could I ask you for a clarification? People are neither
compelled, from what I think I heard you say they are neither compelled nor
prohibited as it relates to 8.1 —

MS. TRUDEL: Correct.

MR. REYNOLDS: That I think is the statement, so regardless, so that says —

MS. SPIRO: My name is Shelly Spiro, I’m representing the long term care
industry. Even though the final rules did state that long term care wasn’t
completely addressed, we do have an issue with going to 5.0, but it’s the
opposite issue: 5.0 does not meet what will be needed for long term care, so
we have the opposite concern that even 8.1 might not meet what’s needed for
long term care, and that’s still the evaluation that’s being done.

MR. REYNOLDS: The process worked, Simon, whichever one it was, people are —

MR. WHITTEMORE: Well maybe not, because I’m breaking with your protocol a
little bit in that I rose because I want to say, on behalf of SureScripts, Ken
Whittemore, we are moving forward with 8.1 as quickly as we can. And as I was
sitting back in the back of the audience, something came back that had kind of
gotten lost in my memory: there’s been a fair amount of misinterpretation on the part of
some of the partners with which we work, both on the physician and the pharmacy
side, that because version 5.0 is mentioned in the final rule some of the
attorneys that work for those organizations have said that that’s what it says
and if you want to move forward with 8.1 you can’t compel us to do so because
that’s what the final rule says. So a little clarity in terms of moving forward
with 8.1 I think would help the industry in that regard.

MR. REYNOLDS: Okay. Michael, you had a comment?

DR. FITZMAURICE: Yes, and my comment is more with the MMA final rule and the
implementation starting the January 1st. Suppose that NCVHS would
recommend that CMS conduct an analysis of the backward compatibility of 8.1 and
inform the Secretary if 8.1 is backward compatible with 5.0, and so if the
Secretary determines that analysis to be accurate and agrees with it then he
could say that those who implement 8.1 are considered in compliance with 5.0.
But you’re not forced to 8.1, you have the choice, you will not be —

MR. REYNOLDS: Compelled nor prohibited, I think that’s a key thing and some
guidance to the industry would probably be good on that.

MS. BYRNE: Terri Byrne, RxHub. One of the things, and I concur with what Ken
said about that misinterpretation of the rule, is that part of the issue, and I
don’t know that anybody has spoken about this yet, is currently the industry is
using 4.2. As part of this process over the last 18 months of making SCRIPT
work for us, and all these enhancements that we’ve made to it so that it would
work for e-prescribing, we’ve gotten to version 8.1.

And so as an industry we’ve decided to move from 4.2 to 8.1 so we’ve
developed, at RxHub we actually are already in production with 8.1, we’ve
developed mapping from 4.2 to 8.1 and nobody moved to 5.0 or 5.1 or 6.0 or
whatever, they moved directly from 4.2 to 8.1 so nobody is coded for 5.0. And
now the rule came out and some of the lawyers are interpreting it as you have
to use 5.0 so we have some participants coming to us saying RxHub we need you
to code to 5.0, even though we’re already on 8.1 we have to go back to 5.0
because that’s what the rule says.

So I mean I think maybe even just a clarification, like Karen you said; they’re
also worried that they’re going to have to do 5.0 in the e-prescribing pilots
and we’re saying, well, 5.0 doesn’t work, we already know that, so why prove
that it doesn’t, let’s start with 8.1 and move forward. So I think that’s the
clarification the industry needs: you don’t have to code to 5.0 because you
haven’t already, and you don’t need to in the future because you’re already
using 8.1.

MR. REYNOLDS: Okay, Simon and then Karen.

DR. COHN: Well I first of all want to thank our presenters for helping
reaffirm my faith in Lynne Gilbertson —

— [Laughter.] —

DR. COHN: But having said that I actually sort of liked the wording, where
Michael was going with a letter that might be produced, which is once again not
going beyond what we’ve heard, and I think it’s really a statement of this
asking, I mean I think we do need to have HHS independently verify that it is
backwards compatible. And clearly the issue gets to be that for
January 1st we’re talking about implementation for those foundation
functions, which I think Lynne commented on clearly in the letter of
reference, because those were really the key pieces that really go into
implementation on January 1st. Other things are obviously piloting,
which I think this would be a great foundation for, but I think those were
the things that we might want to say.

MS. TRUDEL: This is about the first day I’ve left the office without having
my copy of the final rule with me, so I thank whoever’s laptop this is. I do
actually have the text from the preamble of the final rule, and it says:
additionally, the Secretary will ensure that any newer version that incorporates
significant changes from the prior version undergoes notice and comment
rulemaking before industry compliance is required; however, we acknowledge the
need to elicit input from interested parties. Therefore we will ask the NCVHS
to assess new versions of the standards if they are developed, obtain input
from SDOs and other organizations, and provide recommendations to the Secretary
regarding whether the new version should be adopted. We do not anticipate that
the Secretary would waive notice and comment rulemaking in any case where a new
version is not backward compatible with the most recent prior adopted version.

So I think what that says is that it might be nice for the committee to draft a —

MR. REYNOLDS: And Michael, I hope you did not erase what you started with.

All right, Jeff has got a comment and then we’re going to take a break.

MR. BLAIR: I make this comment with real concern because this is not what I
would like the case to be. I am worried that if we don’t get a release from
Congress we’d have to go through the whole process, and I don’t want to do
that because —

MR. REYNOLDS: Everybody is shaking their head.

DR. COHN: Jeff, there’s actually nothing in the legislation that references
what you’re describing, it has to do with the federal, there’s some sort of —

MS. TRUDEL: The Administrative Procedure Act.

DR. COHN: — Procedure Act, and the actual, the MMA rule, as Karen, she just
read the applicable —

MS. TRUDEL: The language in the final rule permits the Secretary to
recognize a new version that can be adopted voluntarily by the industry by
notice, by Federal Register notice, which is a much shorter process than notice
and comment rulemaking. We’re simply modifying the incorporation by reference
to incorporate another implementation guide along with the one that we already
incorporated, as I understand it.

MR. REYNOLDS: Okay, Ross, did you have a comment?

MR. MARTIN: This is Ross Martin with Pfizer. It’s a question about how this
would be stated for any new versions that would be adopted through this
process that doesn’t require the NPRM process. If we’re talking about the
functionality, the functional capabilities of the original foundational
standard, and I’ll also refer to it as a floor standard, version 5.0, can we
clarify that as long as those functions are still possible within the new
version, someone can voluntarily adopt it, and as long as anyone who remains
on the floor version 5.0 can still communicate with someone using a later
version, that shouldn’t be prohibited? If that can be explicitly clarified in
any kind of future language I think that would be very helpful. Is my comment
clear? Did I state that clearly?

MS. TRUDEL: Well first of all I think that’s already pretty implicit or
explicit in the language in the preamble to the final rule, the other thing is
that if we were to do this we would adopt the later version only for the
functionalities that we adopted the earlier version for. So in other words we
would not be adopting the fill status notification for use as an alternate
foundation standard nor would we be skirting the pilot process by adopting the
medication history for use as a foundation standard, it would simply be for the
same functionalities that we adopted 5.0 we’re now adopting 8.1.

MR. MARTIN: Except that you’ve already heard from two different major
network people that have said that lawyers are telling them that their
interpretation of that explicit language is that they can’t move beyond the
named version.

MS. TRUDEL: Okay, all I can tell you is that I’m not an attorney and I can’t
develop frequently asked questions off the top of my head at this point. We’re
working with our general counsel folks to answer some of these questions, and
the other thing that will clarify this process is if there is really good cause
to adopt and incorporate 8.1 by reference. We have two different ways of
approaching the issue.
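
[Editor’s note: the “floor version” logic discussed above can be sketched as a
toy compatibility check. This is illustrative only, not drawn from the rule
text or any NCPDP artifact; the list of verified versions is an assumption
standing in for the HHS verification the subcommittee discussed.]

```python
# Toy illustration (an assumption, not the rule's mechanism) of the
# "floor version" idea: participants may voluntarily move past the named
# version 5.0, provided the later version has been verified as backward
# compatible with the floor, so both populations can still communicate.

FLOOR = (5, 0)  # the version named in the final rule

# Hypothetical registry of versions verified backward compatible with
# the floor; 8.1 is the version the industry actually moved to.
VERIFIED_COMPATIBLE = {(5, 0), (8, 1)}

def can_communicate(sender: tuple, receiver: tuple) -> bool:
    """A pair can interoperate if each party is on the floor version or
    on a version verified backward compatible with it."""
    return sender in VERIFIED_COMPATIBLE and receiver in VERIFIED_COMPATIBLE

# An 8.1 participant and a floor-version participant can still talk:
print(can_communicate((8, 1), FLOOR))  # True
```

Under this sketch, an unverified intermediate version such as 6.0 would fail
the check, which is the situation the lawyers’ strict reading creates.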

MR. REYNOLDS: Okay, process check. We’re going to take a break, we have
Kathleen Fyffe scheduled, Laura, are you here for the rest of the day? Lynne?
Tony? Then we will listen to Kathleen and then we may have to forego some, so
we can make sure that we at least touch base briefly on the prior auth and the
codified SIG. Anybody on the committee disagree with that? All right, that will
be our process.

Thank you and we’ll be back at 3:25.

[Brief break.]

MR. REYNOLDS: Okay, it’s 3:25 and we’ve held Kathleen up, our next presenter
is Kathleen Fyffe from the Office of the National Coordinator so we’ll take a
brief respite from moving the world forward on e-prescribing and we welcome
you, so thank you, Kathleen.

Agenda Item: ONC Update – Ms. Fyffe and Mr. DuPond

MS. FYFFE: It’s a real pleasure to be here this afternoon with my old
friends and colleagues. My colleague Lemont DuPond, who is in the Office of the
National Coordinator, is going to speak first about the NHIN contracts. We are
on a limited timeframe today, Lemont actually has to get back to a conference
call that he stepped out of so he’ll be here for about 15 minutes and then I’ll
take over after that and tell you about the activities in the Gulf Coast.
However, I have to be out of here by 4:00 to go back to a meeting —

MR. REYNOLDS: We’ll thank you ahead of time.

MR. DUPOND: Thank you, I do apologize, I’m actually due on two consecutive
conference calls to review the comprehensive work plans for the contractors
that won the NHIN architecture prototype so I’m busy in real time working these
contracts as I discuss them.

So some of you may be familiar, but for those that are not I want to talk
this afternoon about the NHIN architecture prototype awards. To give you an
overview, the RFP, which was released last June I believe, stated that the
purpose of the contract was to develop and evaluate prototypes of an NHIN
architecture that maximizes the use of existing resources such as the internet
to achieve widespread interoperability among health care software applications,
particularly EHRs. So that was the stated purpose of the contract embedded in
the RFP, and that’s publicly available.

And from that we had anticipated a high level of interest; I think we were a
little overwhelmed with the number of responses we received. We received 55
proposals, from which we did a review and selection process that ultimately
arrived at awards, contracts totaling $18.6 million, to four consortia.
And the four consortia are as follows, I’ll go alphabetically, this is not in
order of preference.

The first team, from Accensure(?), received $5.6 million over 12
months; all the contracts are for a 12-month, one-year timeframe. And Accensure
is working in three distinct health care markets. As part of the solicitation
we asked not only for the development of an architecture but a proof of
concept, a real world prototype that would be conducted in three, as we call
them, health care markets. So as part of their proposals they had to identify
three geographic regions in which they would take information from systems and
share it across systems in other geographic locations.

So Accensure is working in three states: Kentucky, Tennessee and West
Virginia. Their primary technology partners are Appleon(?), Cisco, Oracle, and
Quavatics(?). There’s an array of second tier providers that are also involved,
and that’s information that’s available in the press release on the HHS website,
so if anyone needs that let me know and I can find it for you.

The second contract was to CSC, and that was for approximately $6.7 million.
They’re going to be working in health care markets in California,
Indianapolis, and Massachusetts, so those three areas. And their primary
partners are Browsersoft(?), the EHR vendor associations, Microsoft,
Regenstrief, Silas Mashers and Sun Microsystems.

The third was IBM, and they received a contract for $5.7 million. They’re
going to be working in two health care markets in North Carolina, one in
Rockingham County and the second in the Research Triangle Park. They’ll also be
working in the state of New York in the Connick(?) IPA, and IBM’s primary
technology partners are Argesie(?), Business Innovation, Cisco, and
Gennium(?).

And finally the fourth contract was awarded to Northrop Grumman; they
received $2.9 million and they’re working in two health care markets in
Ohio, one centered around the University Hospital Health System in Cleveland
and the second being Health Bridge in Cincinnati, and a third market in Santa
Cruz, California. And they’re going to be working with Axalottle(?), First
Consultant Group and WebMD.

And as I mentioned before these contracts are aggressive, they are for 12
months, the execution date of the contracts was in November so they’re already
up and running and we’re going through the awkward formative stages of
calibrating the activities and working through some of the preliminary aspects
of planning.

And in doing so, one of the things that we have echoed not only through this
calibration period but also prior to that during negotiations is that their
charge is to do two things: primarily, as the title suggests, to deliver
an architecture and to deliver a prototype. What we emphasized to them was a
balance between these twin priorities; the effort is more than simply the
development of an architectural white paper. We could have left it at that and
had yet another series of white papers on the interfaces that would be
developed. But what we wanted to do was to instantiate those types of
architectural white papers in real world prototypes to show the market that it
is feasible and doable.

The tension is, on the other side, that a number of these are large systems
integrators, so their focus is on the prototype, and they’d like to tell you,
the effort is to plan the architectural functional requirements, we’ll get you
a prototype, and you’ll have an end to end solution for an NHIN architecture.
And that is not the intent of this solicitation, so we’ve used the 20/80
percentage to really emphasize that 80 percent of what we expect is an
architecture and 20 percent of what we expect is a prototype that will prove
the concept at the end of the day. Oftentimes, being systems integrators and
systems developers, they’ll focus their 80 percent on the development of the
particulars of the prototype, so that’s been a learning exercise.

And the analogy we like to use is that these are not four contractors that
are engaged in a horse race where they’re going to scream around the track and
the first one that comes to the finish is going to win a nationwide contract
that will then be used to deploy a Nationwide Health Information Network.
Rather we see it as more in a sense a car race where at various intervals
around the track we’re going to stop the race, we’re going to ask the four
vendors to get out of their cars, take them apart, and not only display their
architecture, their interface, their components to each other, but also to
display them in public forums so everyone that is watching, observing and
participating in this to varying degrees also has an opportunity to understand
what is going on.

At the end of the race what we’re going to do is, in a sense, have a system,
a list of capabilities and requirements, that as Dr. Brailer likes to
articulate will serve as the required features for anyone who wants to or
envisions themselves to be a Nationwide Health Information Network service
provider. So just as today you have a cellular phone and you can buy or choose
competing plans, ultimately we believe that through an integrated system you
will have an electronic health record application or other type of application
where you can go out and accept bids from NHIN network service providers, in
very much the same way that you have roaming and nationwide connectivity
through your cellular phone.

So that’s the vision. The hooks that we have in the system, or the contract,
that are important to consider are that as one of the deliverables we insisted
on a perpetual license for the software and the components that are used to
develop and deploy the prototype. We also have at the core the idea that the
intellectual property ownership of the architecture falls in the public
domain. The implementation guides and the other aspects that they use to create
and move forward, they can keep, but in a sense we’re going to have those
interface specifications and other types of aspects available in the public
domain so that not only the winners but also the other 51 proposers that were
interested in participating will be able to understand and ultimately be part
of the portfolio of service providers in the space.

And we also want this to be an iterative process. As I mentioned, we’re still
in the formative stages, but what we’d like to do over the course of the next
year with respect to these particular contracts is have very discrete events
that are built around the promulgation of what we call artifacts. The first
example of this is the development of use cases; the use cases as defined by
the breakthroughs for the American Health Information Community, or the
community, are really going to help drive and scope the types of activities
that these and other contractors are going to be involved in.

We deliberately set this up as a participatory exercise so that we can get
as much feedback from bodies like yourselves but also entities not only within
our portfolio of contracts but also the entities that we touch in the public
and private sectors. So we envision, and the dates and schedules for this have
yet to be pinned down, four or five opportunities throughout the next year
where we will bring people together and in a sense have an open display of
these race cars that are going around, so that we all have a sense of what’s
going on. And there will be opportunities not only for input into the systems,
but there will be output from these exercises of public display that will feed
into other processes.

So we’re hopeful and encouraged that by making this a very public process
that we will ensure that the marketplace has a readily available pool of people
that will be energetically and enthusiastically involved in providing these
services in the future.

In terms of the very near term focus and schedule for this particular set of
four contracts, the use case development we really want to hone and finish by
late this year and early next year, so that the breakthrough process, in terms
of teasing them out into more refined use cases, would happen sometime early
next year. The detailed technical design and architecture would also be
unveiled in bits and pieces through early 2006, and in the summer the
contractors are expected to have not only a deployment plan but an operational
plan and revenue and cost models, which will be useful to entities outside in
terms of understanding what it takes to provide these services, but also, from
the health care market perspective, what it takes to engage and be part of
procuring this type of service. And then finally, the fall of 2006, November
’06, is their date for the functional prototype to be developed, tested and
evaluated, and licenses turned over to us.

So we believe it’s a pretty aggressive timeframe; we’re encouraged by the
progress that we’re making to date and look forward to discussing this and
showing what they’re all up to in the coming year.

MR. REYNOLDS: Thank you, we’ll keep you to your quarter of, so we’ll allow a
couple of questions. Is that when you need to be gone, quarter of?

MR. DUPOND: Yeah, that’s correct, I have to go back and help Dr. Lunsk(?)
make sure that we’re —

MR. REYNOLDS: We will hold you on your schedule. Jeffrey?

MR. BLAIR: Thank you, congratulations on the progress that you’ve made so
far. The question I have really derives from an emerging RHIO viewpoint, and
since I happen to be involved in one of the RHIOs, some of the things we’re
struggling with we’re really hoping that these prototypes will give us some
guidance on. And we have a sense from looking at the architecture and the
prototypes that it will help us in one area, but there are four. The one area
appears to be exchanging clinical information via access to electronic health
records, which we’re assuming is going to be some form of a health information
exchange.

The second type is already out there to some degree and that is
communications between providers and payers with the HIPAA financial
administrative transactions, all the ASC X12N transactions, NCPDP
telecommunication standards, but we’re a little concerned that we’re going to
wind up having two different types of networks.

A third is telehealth networks that are also out there serving rural areas
but don’t have access to patient records.

And then the fourth is the e-prescribing network. So we’ve got four types of
networks out there, and we’re hoping that these prototypes, as a matter of fact
even on our own we’re going to try to see what we can do to start to integrate
these, we sure would hope that these prototypes would take a look at those four
types of networks that are emerging and look at how they get integrated.

MR. DUPOND: I think that’s an objective that’s scoped by what’s emerging as
the breakthroughs; in other words, it’s part of their charge, and a primary
driver for each of these four contractors’ charge is to satisfy the
requirements of the broadly defined use cases. So what we’ll find is that
health information exchange will be sort of a primary driver of that, number
one.

Number two, with respect to communications for providers and payers how that
teases out and evolves and filters in could be demonstrated in one of the four,
could be demonstrated in two, or maybe in four of the four.

So as each of these doesn’t have specific prescriptions of what they have to
do to fulfill their contract, within broad parameters and the use cases they
have opportunities to do some picking and choosing.

So what we’re going to try to do is make sure that we have a rounded
portfolio of aspects that will help ensure that the integration doesn’t proceed
in the instantiation of one way single use tracks. Now with respect to
tele-consultation and the like —

MR. BLAIR: Telehealth and telemedicine, yes.

MR. DUPOND: That’s going to be, I think, a little bit harder to crack, and I
don’t think that’s currently within the purview or scope of the contractors per
se. That’s not to say that the environment around which the contractors operate
is not thinking about these, but at this point I’m not sure that that’s within
scope within the 12-month timeframe that we’ve prescribed for this activity.

MR. REYNOLDS: When do you foresee the breakthroughs being identified?

MR. DUPOND: We are hopeful that the breakthroughs will be identified,
enumerated, and in a sense turned over to the contractors very soon, within the
next couple of weeks at the outside. So we’re very hopeful that what occurs
will be the Office of the National Coordinator taking what was the articulation
of the breakthroughs from the second community meeting, which was at the end
of November, and refining them to such a degree that the contractors can then
begin to work through and define the various elements that would constitute a
use case based on their perspectives.

What we’re going to do at that point, once the contractors are charged
with this exercise of developing the use cases, is bring them back in and in a
sense harmonize them, to make them roll up into broad use cases again that
reflect all of the multiple perspectives, because as you can imagine, from the
systems integrator’s perspective they’re going to want to know functional
requirements. From the HSPE(?) perspective they’re going to want to know which
standards are the ones that they need to focus on in terms of readiness.
CCHIT(?) has already promulgated use cases and developed activities around
ambulatory EHR certification.

So each of these contractors bears their own peculiar perspectives that
we’re going to have to accommodate, I don’t want to use the word parochial,
but we’re going to have to be able to make the tent big enough so that once we
turn these back over to them and they proceed, they will see a reflection of
some of their activities but also look at and understand the broader
perspective of what their brethren are doing on the NHIN contracts and on the
other activities outside of the scope of our contractual arenas.

So we’re hopeful that the breakthroughs really move forward in the next
couple of weeks.

MR. REYNOLDS: Okay, any other questions? Okay, well you can walk to your
next meeting, you don’t have to run. Thank you very much, excellent job.

MS. FYFFE: A couple of weeks ago Secretary Mike Leavitt announced two
important agreements regarding the Gulf Coast and a digital recovery effort.
One of those involves the Southern Governors’ Association; the other is a
contract with the State of Louisiana Department of Health and Hospitals. And
I’d like to talk about those two agreements today and in particular tell you
what I can about the Gulf Coast project with the State of Louisiana.

As you are aware, in the New Orleans area the recent hurricanes, particularly
Hurricane Katrina, very much devastated the paper records and many existing
electronic records of providers. There were providers that might have had
electronic records in their offices, but they didn’t have a back-up anywhere.
And of course the same held true for both hospitals and physicians’ offices.
The paper records are probably gone forever.

With all that in mind, our office was able in the final couple of weeks of
September to very quickly put together a project for the Gulf Coast with the
Department of Health and Hospitals in Louisiana to provide $3.7 million to
help them in digital recovery. And at the same time we signed a Memorandum of
Understanding with the Southern Governors’ Association to create what we’re
titling a Gulf Coast Health Information Task Force to provide leadership and
coordinate all of the HHS activities that are ongoing in the Gulf Coast.

The Southern Governors’ Association is an arm of the Council of State
Governments, and on the board of directors of SGA are the chairperson, Governor
Kathleen Blanco, and vice chair Governor Barbour of Mississippi, and they and
their colleagues are very, very interested in whatever can be done to be
certain that when disaster strikes again, wherever it might be, there are
electronic health records so that there can be continuity of care for the
patients in the Gulf Coast and elsewhere.

The Gulf Coast Health Information Task Force is being formed as we speak, I
talked to the SGA just today and the task force will consist of approximately
15 to 20 leaders, senior leaders, representing the public sector, the private
sector, local regional folks in the Gulf Coast, as well as some national expert
representation. And the first meeting will likely be at the end of January or
the beginning of February. As that information becomes more available in terms
of who the leadership people will be I’ll certainly let you all know about
that. My sense is that the representatives will include people from Alabama,
Mississippi, Louisiana and Texas because those were the areas most affected by
the recent hurricanes.

As for the State of Louisiana contract, again this is with the Department of
Health and Hospitals in the State of Louisiana, and it’s $3.7 million. I can
tell you about the scope of work for that contract; I’ve been advised not to
go into great detail, and that comes from our contracts office. I’d like to
say that the State of Louisiana DHH has subcontracted with some technology
companies as well as consultants, and I’m not able to name who they are, but I
will say that they include both nationally recognized organizations and local
organizations within the Gulf Coast to help them with this project, because
DHH does not have the technical capacity and the manpower to do this on their
own. So they’ve had some ongoing relationships with some organizations that
they’re going to be using as subcontractors for this.

The purpose of the contract is to support the health information needs of
the evacuees from the Gulf Coast, to recover the health information
infrastructure in the affected areas of the Gulf Coast, and to develop a
prototype that will include interoperable health records. This project, this
contract, will work to the extent possible in a collaborative fashion with the
NHIN contracts that Lemont talked about, as well as with the contract we have
for the compliance certification process, the one for standards harmonization,
and also the contract involving privacy and security solutions for
interoperable health information exchange.

As for the exact statement of work, the project involves providing
leadership to the Gulf Coast Health Information Task Force and also
participation in a Gulf Coast Health Information Organization, which is a
RHIO-type organization that will be formed. They’re going to be enhancing the
Louisiana area’s technical capacity, including work with vendors and vendor
partnerships that have proven technology that could be used to develop a
health information network within Louisiana.

They’re going to be producing detailed technical design documents and
architectures, a very detailed deployment plan, an operational plan for how you
would actually use an interoperable health infrastructure within the Gulf
Coast, and they’re also going to be developing a model to evaluate the costs
and benefits for an interoperable infrastructure in the Gulf Coast.

There are going to be two health care marketplaces emphasized in Louisiana,
both of which are in the general area of New Orleans that was affected, and
the general timeframe is that the development is occurring now, testing of the
prototype will be in early to mid summer, with final deployment by the end of
summer or into September, which is when the contract ends; that’s September
2006.

It’s a very ambitious contract. The folks down there, and I just spent two
days in the Gulf Coast, are still a bit shell-shocked, but I will say that this
project provides them with a real purpose and a real way to work very, very
hard to hopefully prevent the devastation of records that occurred from the
hurricanes, so that that won’t be as much of a problem in the future.

And at this point I’d be happy to answer whatever questions you have.

DR. FITZMAURICE: Part of your description of the $3.7 million contract
sounds like it’s an NHIN, that there’s a lot of synergy between the four NHINs
that were funded and this one. So are you the project officer on this one?


DR. FITZMAURICE: So you’ll be working closely probably with Lemont on the —

MS. FYFFE: Correct, yes. The other contracts that are in ONC are, as I
mentioned, the certification compliance, the state privacy and security, the
NHINs, and the standards harmonization; those are all contracts that are
ongoing in our office. And as it’s stated in the Gulf Coast, or the Louisiana,
contract, to the extent possible you may collaborate as much as possible with
the other contracts, but of course our primary objective is to build a
prototype that’s going to be working in the Gulf Coast as soon as possible.

DR. FITZMAURICE: So the others get a chance to plan it and do the
architecture, you actually have to do it before those other four contracts will
have —

MS. FYFFE: We don’t feel that we’re in competition with them but we can
certainly learn —

DR. FITZMAURICE: Not a competition, but all the information that you might
wish you had, you won’t have before you have to do it. It sounds like a very
difficult job.

MS. FYFFE: Yeah, it’s a very ambitious contract.

DR. STEINDEL: Kathleen, yes, it’s very ambitious and very worthwhile. What
impresses me about both what Lemont was talking about and what you’re talking
about is how you’re focusing on the technical details of how we’re going to
transfer information, etc., but what I’m struck with was the stories that I’ve
heard out of the Gulf Coast region that really the technical details were part
of the problem, but the major problem that they had was the political and the
legal details. And to give you a very specific example, while we’ve talked a
lot about the exchange of medication histories, one of the good success
stories that came out of the Gulf Coast region was the exchange of
immunization registry information. And this came about because a lot of the
states suspended their legal restrictions on exchanging information, and this
went beyond just the southern states; some of the northern states that also
received evacuees agreed to exchange information, etc. And this is well and
good, but what I just heard the other day from our CDC immunization people was
that the Mississippi attorneys said, well, this was good in the emergency, but
no more. So I think you should consider that sort of thing when you’re looking
at this whole exchange milieu.

MR. REYNOLDS: Okay, any other questions? Michael.

DR. FITZMAURICE: Is there any thought being given to trying to provide some
common, say, electronic health record? I know that CMS is planning to give,
for a small fee, the Vista system developed by the VA. Might there be some
partnerships that could be worked out so that everybody starts off with some
commonality, and then maybe develop a way to interchange between those who
have some other system and a given base system?

MS. FYFFE: Within the scope of this particular project no, they’re not
looking at the VA system. In my discussions over the past couple of days down
there I do understand that there are some rural physician practices that are
actively looking at the Office Vista system, or Vista Office, but that at this
point is not a formal part of this project.

MR. REYNOLDS: Okay, Kathleen thank you, your task is daunting but we stand
ready to support you any way we can.

MS. FYFFE: Thank you very much.

MR. REYNOLDS: Thank you very much, I’ll let you get back to your next meeting.

Agenda Item: E-Prescribing Update Continued

Okay, if we could call our previous panel back. Thanks again, Kathleen, we
really appreciate it.

Okay, at this point we’re not going to go back on the letter or what was
previously said, we’re going to focus on the prior authorization and the
codified SIG —

— [Multiple speakers.] —

Okay, let’s take the easiest one, which would be the structured codified SIG,
and let’s make sure we cover any questions anybody has on what we heard, and
then we’ll direct our attention to Tony. Laura, do you have any preliminary
comments based on everything you’ve heard since you spoke, and then we’ll open
it up to questions.

MS. TOPOR: I’m starting to regret I didn’t book an earlier flight.

I guess what I would reiterate or reinforce from my earlier comments is that
the work that we’ve done and the decisions that we’ve made, and we being the
task group, were done within the context of the objective we were trying to
accomplish, which was to find a way to standardize the SIG in time for the
2006 pilots. In doing that I again have to commend the task group for the work
that they’ve done, because I think we’ve come a tremendous way. We made
decisions based on the information and the resources that were available to us
through this process.

The code set conundrum will be there long after the first pilot
transaction goes through incorporating the SNOMED codes, and I guess what I want
to make sure is that people understand why we chose to go the SNOMED route:
again, timing, ease of implementation; SNOMED has been at the table with us
since the very beginning. They are able to give us comprehensive codes for
all of the fields that we’ve incorporated into the format, and so what we didn’t
want to do was put pilot participants or other users in the position of saying
okay, for route and site I go to SNOMED, for dose unit I go here, I go there.
And so it may not be what the final standards end up looking like, or in two or
three years we may be at a point to support the use of multiple vocabularies
within the standard.
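As a rough illustration of the single-terminology approach Ms. Topor describes, a codified SIG record in which every coded field draws from one system might look like the following sketch. The field names follow the SIG components discussed in the hearing (dose, dose unit, route, frequency), but the code values are invented placeholders, not real SNOMED CT concept identifiers:

```python
# Illustrative sketch only: the codes below are invented placeholders,
# NOT real SNOMED CT concept IDs.

SYSTEM = "SNOMED-CT"  # the single terminology the task group settled on

def make_sig(dose, dose_unit_code, route_code, frequency_code):
    """Build a minimal codified SIG in which every coded field
    points at the same code system, so receivers never have to map."""
    return {
        "dose": dose,
        "dose_unit": {"system": SYSTEM, "code": dose_unit_code},
        "route": {"system": SYSTEM, "code": route_code},
        "frequency": {"system": SYSTEM, "code": frequency_code},
    }

def uses_single_system(sig):
    """True if every coded element in the SIG names the same code system."""
    coded = [v for v in sig.values() if isinstance(v, dict) and "system" in v]
    return len({c["system"] for c in coded}) == 1

# "Take 1 tablet by mouth daily", with placeholder codes:
sig = make_sig(1, "TAB-0000", "ORAL-0000", "DAILY-0000")
```

The point of the sketch is the shape of the decision, not the codes: a receiver only ever resolves one vocabulary per SIG.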

But in order to really get it done and have it available, that was the
decision, and I think what we’ve come up with, and part of what I tried to lay
out, is that the code set conundrum is bigger than the SIG Standard Task Group. We
tried hard, and those who’ve been on the calls can attest to the
struggle of trying to keep the scope on we need a SIG standard; the code set
issue is out there, it’s bigger than us, and if the committee feels that they
want to charge somebody or ask a group to take that on, that’s something for
NCPDP, I’m sure. If they want help trying to figure out the entire conundrum
I’ll volunteer Lynne to lead that one, but I think that’s the next step. I
think what we’re looking for is for this committee to say okay, now we’ve
got to focus on code sets, and who’s going to do it and lead it and drive it so
that it works for all of the different efforts that we’ve got.

MR. REYNOLDS: Okay, I’ve got a question from Jeff. Do you have a comment on
that, Lynne?

MS. GILBERTSON: One of the things that we put as one of the principles of
the task group is that the code sets and the concepts used within the SIG would
land on one choice. There would not be multiple choices, like
for route you have a choice of seven different code sets, for example,
qualifiers, whatever, mainly because the SIG is too important to map. And we wanted
to say, can everybody be on the same page that says, right, wrong or indifferent,
this is the one we’re using for this concept, and start with that and clean up
mistakes later. So that’s been a challenge but that’s been a
premise we’ve lived with.
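Ms. Gilbertson’s principle, one code set per concept with no mapping, can be sketched as a small registry that rejects a conflicting second binding. The class and names here are hypothetical, purely to show the rule:

```python
# Hypothetical sketch of the task-group principle: each SIG concept
# (route, dose unit, ...) is bound to exactly one code set, and a
# conflicting second binding is rejected rather than mapped.

class ConceptRegistry:
    def __init__(self):
        self._bindings = {}  # concept name -> code set name

    def bind(self, concept, code_set):
        """Bind a concept to a code set; rebinding to the same set is a
        no-op, but binding to a different set raises an error."""
        existing = self._bindings.get(concept)
        if existing is not None and existing != code_set:
            raise ValueError(
                f"{concept} already bound to {existing}; no multiple choices")
        self._bindings[concept] = code_set

reg = ConceptRegistry()
reg.bind("route", "SNOMED-CT")
reg.bind("dose_unit", "SNOMED-CT")
```

Under this rule, the "clean up mistakes later" step is a deliberate rebinding decision, never a parallel alternative.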

MR. REYNOLDS: Okay, Jeffrey? It will be Jeff and then myself and then Steve
unless Mike has a question and we don’t want him to have to go after Steve. And
Randy, okay.

MR. BLAIR: Well, in this case, due to the wisdom of CMS, they intentionally did
not specify either the codes or the versions that would be in the structured
and codified SIG, so there’s not a limitation or constraint or a barrier. And so
it looks like whatever we could come up with in structured and codified SIGs
can be included in the pilot tests, and since most of the folks that are doing
the pilot tests won’t be ready until the March or April or May timeframe to
really start including those things, may I ask either Karen or Laura: are
there constraints that we don’t recognize, or can you at least take
advantage of what this task force has been able to develop? Karen?

MS. TRUDEL: I somehow feel you answered the question as you were asking it.

MR. BLAIR: Good.

MR. REYNOLDS: Okay, so the question I have is about the chart
that you had on SNOMED, HL7, RxNorm and SPL, structured product label. We kind
of went through this one other time, when there was a discussion
and we had the industry go off and start with something that came out of RxHub,
I think it was, and then you came back with a standard, I forget exactly which
one it was, because everybody was kind of close to done. So you nicely
color-coded the first two: the first one is 100 percent, that kind of
synchronization, the second one, which is route, is 95, and then you stopped. So
as a general bit of information, especially as you’re trying to look at how
different it is, what you’re going forward with, what are the relative
percentages, and I don’t need you to get specific, of the
synchronization between any of the rest of those as they would relate to the 100
and 95?

MS. TOPOR: My understanding, and I’m hesitant to quantify it because I don’t
have the level of detail, is that SNOMED and HL7 are generally in sync, and
that’s the feedback that I was given. It wasn’t quantified by SNOMED or HL7 to
the extent where they said dose, and I should clarify, based on a conversation
with Randy, dose form versus dose unit. They were very comfortable saying for
dose unit, dose form, HL7 is using SNOMED. For route, again, they were
comfortable with the 95 percent based upon the distinction with the phase —

I couldn’t get a number for the others, I don’t know if it’s 80 percent or
90 percent, but my instinct is that it is in that range —

MR. REYNOLDS: But it’s higher than 50.

MS. TOPOR: That is my understanding and expectation.

MR. REYNOLDS: Now what does the N/A mean up there, not available, not
applicable —
MS. TOPOR: For strength —

MR. REYNOLDS: Let’s go with strength, because obviously if SNOMED is selected
then you’ve got an N/A under strength —

MS. TOPOR: Which is not an applicable segment within the SIG. It goes back,
if we go all the way back, to some of the earlier discussions we had when we
started this: the task group made a distinct decision to pull out anything
related to the product, such as strength, because that was dealt with within
SCRIPT in another place, so strength would fall there. My understanding, based on
the research I was able to do, is that RxNorm and SPL did not have codes
available for site, frequency, interval, I think I’m wrong on vehicle though,
there’s a little discussion about interpretation, nor administration timing.

MR. REYNOLDS: So there may be differences but no holes.

MS. TOPOR: I hope not.

MR. REYNOLDS: Don’t let me put words in your mouth but that’s what I think I
heard. Okay, Steve and then, wait a minute, he’ll go behind you this time,
we’ll see if that works.

DR. STEINDEL: Actually, from what I recall of your order, Harry, it was Steve,
then Randy, and I think that would be the best way, because I’m going to
lead in with some questions that Randy might be able to touch on as well.
And if you recall during the presentation, and I have to say this is not just
you, and Karen knows this, I’m getting very sensitive to people blindly saying
this is a CHI standard, because the CHI standards are specific terminologies for
specific uses, and using them outside of that context is not really appropriate.

And the CHI standard for medications did not mention SNOMED and the CHI
standard for medications was actually tailored around the NCVHS recommendation
for medications. And that focused primarily on what the FDA was using in that
area which has now manifested itself in structured product labeling.

And at the time NCVHS held those various hearings, what we were told by FDA,
and by the gentleman who is sitting here in particular, was basically that SPL had
terminologies for this and that they were planning to make them public in the
earliest possible fashion. Those terminologies became public, I believe, on
October 31st of this year, available through several locations, one
of which is the NCI thesaurus. So the FDA terminologies that are used for SPL
are now publicly exposed, and this is the true CHI recommendation, the true
NCVHS recommendation.

So what I would urge you to do is to work with the FDA and SPL and the
existing recommendations that come forward from NCVHS, looking at how you can
reconcile what we are recommending, and I sit on both groups, for the medication
standards with what you’re recommending for SPL, because I suspect that what
Lynne is saying is probably very correct: we’d like to point them to one
source. So there may be a way to adjudicate that by getting some people to make
certain compromises, and I think we have a good history through NCPDP of
getting various organizations to come together and make compromises and
develop a standard in an expeditious fashion that meets everybody’s needs.

So with that I’ll segue to Randy, who might want to pick up on what I’ve
commented on.

MR. REYNOLDS: Mike, we’re going to change the order, I think, since Steve
leaned over and looked right at Randy; we need to let him in. So go ahead, Randy.

MR. LEVIN: I like what Steve was saying about working with you on this further,
because with structured product labeling, now I understand your issue, that
you’re working on the organization of the SIG and the terminology is the hard
part. The same thing with structured product labeling: it’s a lot of
organization of information and then codes based on these different pieces. In
the structured product labeling we do have a model for describing how the drug
should be provided, how it should be dosed. So there is a model in there, we
have not fully used that model, but it would be something, as Steve was
saying, that we can try to coordinate with to help with the SIG, and if we
could do something with the structured product labeling, that’s something we
could look at.

The other thing is that we talked about the dose form; you might want to use
something different than dose form for that term. It’s a unit, the way you’re
using it, a unit of measure, which we have in the structured product labeling
as well, but it’s different than the dose form, which is an extended release
tablet versus a tablet; that’s what I would consider a dose form, and it was a
little confusing for me, though in your use case I understand exactly what
you’re using, and we have that as well. We would describe the quantity, there
are 30 tablets in a bottle, so we describe tablets in a very general way, as you
describe; we don’t say 30 extended release tablets, we just say 30 tablets, and
then we get to units. We have a standard for units as well, and of course we
have a standard for route of administration. All these standards are available
and they’ve been available actually for a long time; we’ve been using them for
drug listing for a number of years.

And then the other piece was indication. We’re still looking into how we can
use indication, because we can’t have certain restrictions on
the way we use indication; we’re working in an international community
with these processes, and while the SNOMED license might cover U.S. entities,
we’re working with the Europeans and the Japanese and the Canadians and others
on harmonizing all these terminologies. We need to be cautious about using
SNOMED if it’s going to be a license issue, so we have to negotiate that.

The other issue was vehicle. You brought up briefly that vehicle is many
times a product; that also we have codes for, including whether it’s a food
product or a drug product, so that’s something that we would like to work with
you on as well. And I think it goes back to what Steve was saying: I
understand that for the pilot you want to get something, you really want to test
the SIG, and the codes were sort of maybe secondary, some —

MS. TOPOR: I don’t know that I would say secondary. I think the nuances
become critical from the prescriber perspective, and if we use indication as an
example, let’s just say for Ambien, the indication is you’re using
it to treat insomnia, but the prescriber’s communication to the patient might be
to take it for sleep. And so that’s where, I know, we had a lot of fun on some
of these calls, that’s where again the challenge is out there for all of us to
say okay, let’s try and test this in the pilots right now with the information
we had available, and with some of the SPL not being fully available until just
five weeks ago, that’s why we picked SNOMED. I think going forward absolutely
there needs to be that coordination, and again it comes back to who’s
going to own that overall coordination, and I guess that’s the question that I
would put back to the group.

MR. REYNOLDS: Karen, do you have a comment on this? And then, Michael, I’ll
get you next.

MS. TRUDEL: I do. I think it’s very important for everybody not only to
work together to coordinate this as much as possible, but to remember that we’re
talking about one use case, whereas what you’re talking about is the structured
product label, which has just a myriad of uses. So I think we need to consider
this to be a use case and to move forward accordingly.

MR. LEVIN: And just to follow up on what
Karen just said: the structured product labeling has a lot of potential
to be used in a variety of use cases, and we should look at that. When you
brought up before using GELLO and putting some other pieces of
information there, there’s a lot of potential, so we should work on those
types of things, and again it’s another call for coordination and working
together on this.

MR. REYNOLDS: Okay, Michael, and unless anybody has any burning issue this
will be the last question.

DR. FITZMAURICE: A couple of questions about Laura’s slides. The slide on page
four, next steps: you talk about a pilot timeline. Were you talking
about the electronic prescribing pilots or are you talking about a separate
pilot? I wasn’t clear about whether it’s another set of pilots, which would be
fine with me.

MS. TOPOR: No, it is the e-prescribing pilots, which goes back to the
question of testing it before it’s balloted and approved.

DR. FITZMAURICE: Next question, on the slide NCPDP code set conundrum. It
occurred to me that a lot of these things might be embedded in XML and have their
own tags or own labels, a lot of the information that would be in the patient
SIG. Is there anybody who’s coordinating how you tag, how you name, how you
label, how you name the labels, name the variables in the XML? Is this an HL7
job? It’s probably a question for all of us to consider: there’s no
national dictionary of XML tags; should there be, and who should be in control
of it? I’ll put that as a second question, along with who should run the code
sets.

MS. TOPOR: And I’ll definitely say that that was not a topic that we —

MR. REYNOLDS: Randy, did you have a comment on Michael’s question?

MR. LEVIN: Yeah, I know that in HL7 they’re working a lot on the tags for
the XML, and we’re involved a lot with that, with the SPL, because of how to
use it; we wanted to include dosing in the SPL, which we did, and so a lot of the
tags, if that’s what you’re getting at, would be derived from HL7 work. They’re
in the SPL standard.
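To give a concrete sense of the tagging question Dr. Fitzmaurice raises, here is a tiny sketch of an XML dosing fragment. The element names are modeled loosely on HL7 V3 style but are illustrative assumptions, not the balloted SPL vocabulary, and the code values are placeholders:

```python
# Illustrative only: tag names ("substanceAdministration", "doseQuantity",
# "routeCode") are loosely HL7 V3-styled assumptions; codes are placeholders.
import xml.etree.ElementTree as ET

def dosing_fragment(dose, unit, route_code):
    """Serialize a minimal dosing instruction as an XML fragment."""
    sig = ET.Element("substanceAdministration")
    ET.SubElement(sig, "doseQuantity", value=str(dose), unit=unit)
    ET.SubElement(sig, "routeCode", code=route_code, codeSystem="SNOMED-CT")
    return ET.tostring(sig, encoding="unicode")

fragment = dosing_fragment(1, "tablet", "ORAL-0000")
```

The coordination problem the committee is discussing is exactly which names those elements and attributes carry and who governs the dictionary they come from.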

DR. FITZMAURICE: But I would also expect some X12 tags and other tags,
there’s a need for national coordination of this.

Next question. The slide you showed with the SNOMED, HL7, RxNorm, SPL, and
all the checks, that was an extremely valuable slide. It shows your analysis of
how these things compare; of all the slides I think that one probably took
maybe the most work, but it’s an amazingly good slide for displaying —

And one last statement: AHRQ supported two of these workgroups, very little
money; we commissioned a couple of reports, we funded travel of leadership to
be at the meetings, and I’m reminded of what John Eisenberg said several years
ago, that chlorophyll is not the only green catalyst —

— [Laughter.] —

— you have people willing to work, and sometimes all it takes is a little
bit of oil on the wheels. The squeaky wheels were Lynne Gilbertson, Tony
Schueth and Ross Martin; they let me know the needs and they convinced me. But
the work of their groups, the group that Tony and Ross formed, the Data
Decision Support Prior Authorization Workgroup, the Prior Authorization
Workflow to Standards Task Group members, and the Standards SIG Industry Task
Group members, their work can’t be overcomplimented. I mean, they’re not
sitting around the table, but I’ve sat in on some of their meetings; they work
hard, they bring things to each meeting, these people keep the meetings going
and keep pushing them, those people do a lot of good work. They’ve really done
a service to the nation, and I want to congratulate them.

MR. REYNOLDS: I think the full committee would second that, and also, Michael,
I think we would thank AHRQ very much, because obviously in all of
our discussions it was mentioned that some money needed to come forward, and
thank you for being the ones that stepped up to supply it. I think that’s key.

Lynne, you have a comment?

MS. GILBERTSON: I just have a question for Randy. During one of our
conference calls, when we were talking about route and trying to
get down to common language, English speaking, you mentioned that it has the
manufacturer’s perspective. Can SPL grow to be more,
or does it have code sets that intend to be more than just that perspective? Or
is it really constrained, and we need something above it that encompasses that
perspective as well as the prescriber’s perspective? And is there something
like that going on, or are there five arms and legs and nobody meeting?

MR. LEVIN: The SPL itself is meant to be the manufacturer and the
FDA together coming up with this and representing the package insert and the
prescribing information, the product information. Whether things can be built
upon that standard and incorporated into or attached to or related to what the
manufacturer and the FDA come out with, that could happen. But we have a piece of
that that we need to fulfill a regulatory issue, to make sure everyone knows
the recommendations for using the medication and the adverse events and the
interactions; if someone after the fact puts additions to that or adds to that
using that same structure, that same SPL standard, that’s something that is a —

MR. REYNOLDS: Okay, last question, comment from Stan please and then we’ll
move on to the next.

DR. HUFF: This is just a comment, and I would point out that because I
co-chair in HL7 I have a potential conflict of interest, but I don’t think this
is out of line, it’s just more informational. We worked, and people from Randy’s
group have worked, with HL7 to make the set of things that are dose forms in HL7
be a combination of the manufactured dose forms plus the other dose forms that
you need when you’re writing SIGs and other things. And so the whole idea is
that whatever dose forms you need, they would come from a common pool of
representations, and the FDA could say these are the only ones that we want
to use in SPL, and somebody else who worried about prescriptions or SIGs would
say these are the things, the subset of those, that you could use, but they
would be drawn from a common set of concepts.

Now, that is from a few years ago when we were doing serious work on that, and
I don’t know if that ever got folded into the current discussions that are
going on or not, and it doesn’t have to live at HL7, it could live anywhere.
But I think the strategy you want to work toward is having a common set from
which you could specify subsets for use in a particular domain, but they’re
non-overlapping and they come from a consistent set of codes that make the whole
space consistent.
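Dr. Huff’s strategy, one master pool of concepts with per-domain subsets drawn from it, can be sketched as follows; the concept names and domain lists are invented examples, not the actual HL7 dose-form vocabulary:

```python
# Sketch of the common-pool strategy: every domain subset must draw only
# from the master pool, so a concept shared across domains is literally
# the same concept. Concept names below are invented placeholders.

MASTER_POOL = {"tablet", "extended-release tablet", "capsule", "oral solution"}

DOMAIN_SUBSETS = {
    "SPL": {"tablet", "extended-release tablet", "capsule"},
    "prescribing": {"tablet", "capsule", "oral solution"},
}

def valid_subsets(pool, subsets):
    """True if every domain's subset is drawn entirely from the pool."""
    return all(s <= pool for s in subsets.values())
```

A domain that invented its own "lozenge" concept outside the pool would fail the check, which is exactly the inconsistency the common-set approach is meant to prevent.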

MS. GILBERTSON: Are those loaded into UMLS or anything that you’re aware of?
We understand what you’re saying, but we don’t know who to go to to include it,
envelope it, or whatever.

DR. HUFF: Those are available in the Version 3 ballot package and in the Version
3 terminology database, and those are either in or being loaded into the
UMLS.

MR. REYNOLDS: Okay, let’s move on to attachments, or excuse me, that’s my
issue, let’s move on to prior approval. And let’s do this, let’s time-box this
for the next 20 minutes, so 4:45, then we’ll move into committee discussion and
try to adjourn by 5:30. So, Tony, my question is, well, first I thought what you
put together was really exciting, actually; then I got to the discussion about the
275. And the reason that threw me is obviously everything we’ve been
talking about was somewhat of a fast track, somewhat of a moving from here
to there briskly. And knowing right now that the NPRM is on the street for
attachments, the 275 is not really an approved and accepted format. Seeing it
up there as the basis for the recommendation kind of threw me and made me
realize maybe I don’t understand exactly how you’re approaching it. So if you
could help me with that; it’s not a challenge, good or bad, I was
following along nicely and then, boom, that jumped out of the middle of the
slide, which literally is in the middle of the slide —

MR. BLAIR: Harry, could I just clarify? What Harry is saying, in the request
for applications it indicated that the pilot, the e-prescribing pilot tests
would be testing the ASC X12N 278 —

MR. SCHUETH: That’s correct.

MR. BLAIR: Not the 275, it didn’t mention 275, so that’s what Harry is
trying to determine —

MR. REYNOLDS: Well, that’s part of what I’m communicating, but what I’m also
communicating is I know that the 275 is being dealt with right now through an
entirely different NPRM, so I know that’s being answered. But I also have seen
differing things, including something I believe came out from either HHS or CMS
that talked about that even being delayed until 2008; I can get you the
document if you want, it was a one-liner
that came out either from HHS or CMS, it came across, it was hand
delivered to my desk. But the point is, help me understand how this all —

DR. STEINDEL: Harry, could I add just a very quick addition, because
it’s right in line: not only is it, as Harry was saying, that the 275 is not in
use yet and may be delayed, but the very types of claims attachment, or the
equivalent, that you’re thinking of putting into it are not even on
the radar screen.

MR. REYNOLDS: It’s not one of the six.

MR. SCHUETH: First of all, let’s step back, let’s take a half step back.
The 278 Version 4010, the HIPAA-named prior authorization standard, is for
services and procedures; it has very limited drug-related fields. They had
developed workarounds for that, but the workarounds didn’t really accomplish
prior authorization to the degree we realized it needed to be after we did our
analysis of these 350 forms that we gathered from the industry. So
we decided clearly there needed to be a modification to the 278. The X12
workgroup that worked on the 278 obviously modified the
278 to address these workarounds, and they made a decision to defer to the 275
plus the PA attachment, but I don’t want to put that all on them; I mean, clearly
the NCPDP Task Group felt that, because of the volume of prior authorization
information that was being transmitted between the provider and the payer,
it would be exceptionally challenging to modify the 278 to be able
to accommodate the transmittal of that kind of information.

Now, what you’re talking about as far as where the 275 and HL7 claims
attachments exist, I mean, I think that’s relatively new news. We’ve been
working on the 275, and I’ve testified about the 275 and the PA attachment in my
other two testimonies, or at least my other one for sure, and I think Lynne
might have mentioned it even in a prior one, so we’ve been working on
this for a while. The idea when we first started talking about it was could we
use other components of the claims attachment instead of having a separate PA
attachment, and the answer was no, let’s put it all into one attachment.

Furthermore, the other thing I would mention, and I’m going to ask for
Lynne’s help on this a little bit too, is that as a
workgroup or a task group we did draft a letter and comments in response to the
NPRM for the 275 plus claims attachment, saying that we supported it for this
reason and that this was a use case that we felt could be accommodated —

Now I know that you’re also wondering about the pilot component of this, but
let me ask: Lynne, is there anything you want to add before I get to the pilot
component?

Okay, as far as the pilots are concerned, from our perspective it’s a pilot,
meaning we don’t have to have a balloted standard. What we’re going to do is
just like, and I’m sorry, I forget the organization that just concluded pilot
testing the 275 plus the claims attachment, Empire; they didn’t have to have a
balloted standard to do a pilot, and we’re not suggesting that the 275 and the PA
attachment are balloted. My understanding is that the 275, we’re still working
on that, but it’s to the point where it could be pilot tested. It wasn’t a part
of my testimony that the 275 was balloted; I said it could be pilot tested.
The same is true of the PA attachment. After the January meeting, where the
workgroup gets together and goes through, we’ve drafted the PA attachment,
the workgroup needs to go through that and it needs to go out for public
comment. I’m not suggesting that the HL7 PA attachment would be balloted by the
time of the pilot; I’m suggesting that it could be pilot tested.

MR. REYNOLDS: No, no, I wasn’t challenging. Some of us are reading that there
are six attachments identified, and the NPRM has gone on, and now
this is, Karen, why don’t you comment —

MS. TRUDEL: The HIPAA standard is for a claims attachment, it’s an
attachment for information that’s necessary to adjudicate a claim so it’s
basically an attachment in response to an 837 having been submitted. This is
essentially an attachment to a 278. So the transaction is completely different
from a HIPAA perspective.

MR. REYNOLDS: That’s why I’m asking the clarification because I don’t
understand, we trust what you guys say, I’m trying to put it in context.

MS. TRUDEL: Basically you may use for instance a 275 for a number of
different things and there’s a specific implementation guide for each one of
them. The definition of a claims attachment is information that’s needed to
adjudicate the claim, the definition of a prior authorization attachment would
be something that is needed to fulfill a prior authorization request —

MR. REYNOLDS: Or e-prescribing.

MS. TRUDEL: — which is a different transaction.

MR. REYNOLDS: No, no, no, that’s great, that’s the clarification I needed:
how was all this rulemaking going on, and then all of a sudden here’s another
one, and how does it relate, and are we having a collision back to the earlier —

DR. STEINDEL: Well, actually we got a very good answer from Karen that clears
up all of the train wreck issues, but actually my comments really don’t have to
do with the regulatory or legal train wrecks that are potentially a problem;
it’s actually getting people focused to try and pilot and develop against this
particular prior authorization pseudo-claims-attachment type transaction. The
payers, who are the people that you’re going to be working with on this,
that group is focused on the HIPAA claims attachment and the six areas there,
and we had a great deal of difficulty even getting a limited pilot study on the
HIPAA claims attachment from Empire. And if they’re going to be focusing on
implementation, etc., of the HIPAA claims attachment, how do you think you’re
going to be able to find the bandwidth with the same people to work on
something that is really very unproven?

We’re dealing with several issues. We’re dealing with the transmission of
basic medical information in a claims-attachment type format, and I don’t know
what the comments are going to come back on for the clinical and the laboratory
responses yet, so we don’t know how that flies even in the HIPAA arena, and yet
you’re asking for more detailed types of medication information, I assume, with
prior authorization. You’re also asking to use GELLO —

MR. SCHUETH: Wait a minute, let’s separate them.

DR. STEINDEL: GELLO is associated with this process, and since I have the
floor let me run them together. The problem here is that GELLO itself, yes,
it is an ANSI standard, but to my knowledge it’s not implemented anywhere; it’s
not implemented by the payers, it’s not implemented by the receivers, and so
there is a tremendous number of unknowns in your proposal. I think they’re still
in the research stage; they’re not even at the stage of being
ready to be piloted. So this is what concerns me.

MR. SCHUETH: So in my response to that I’m going to separate GELLO from the PA
claims attachment. The PA attachment is, you’re
right, not quite ready, but my testimony was that it would be ready by January;
that’s what the workgroup is telling me. It’s in draft form, and the workgroup
is going to roll up their sleeves and work on it in January at the HL7 meeting.

The other part of your comment was how do you find somebody to pilot
test that. Well, I’ll tell you that a large PBM, I was a part of a coalition
response to the CMS ERX pilot, and a large PBM was willing to step up to that.
Now they scoped out the cost of that, and as I earlier testified, if you added
that to all the other things that we were asked
to do, adding that on top of it was going to be over $2 million.
The exact cost of that is available and known to AHRQ and CMS; I’m not at liberty
to testify what that is today for confidentiality reasons, I signed a
non-disclosure document, I can’t testify, but that was submitted as part of the
application, so John White, Maria, and others know what that dollar amount was.

So my answer to you is people will step up. Empire stepped up to pilot
testing the claims attachment; somebody will step up. Now, is the industry going
to jump in and build this? No, but heck, we haven’t pilot tested this yet. So
you’re right, there’s a lot of unknown. If I’m the CIO of Empire or Blue Cross
Blue Shield of North Carolina, I wouldn’t do this, but I might be willing to
pilot test it if I can get my cost covered; I might even not have to have
my full cost covered, I might be willing to kick in a piece of it myself
because it’s something that I’m interested in.

And a third thing before I get to GELLO, Stephen, is that what I presented
wasn’t a proposal, it was an update: here’s where we are. And what I was asking
for was help, and I’m hoping that we might be able to include the PA
attachment as part of the CMS pilots, but that’s just a hope. I mean, I’ve
presented this to John, to Dr. White; I sent an email to him on this, the task
group has seen it, it explained the deficiencies in the 278, I explained the
notion of the 275 and the PA attachment. I attached the document where we
priced out what it was going to cost, and his response was that he would get to
that after they picked who the pilot people were going to be. He was not
promising that it would be part of the pilot, I’m not suggesting that and I
don’t want to testify to that, but he said right now he was focused on picking
who the pilots were going to be and then he’d address that.

What I separately was told by someone else at CMS was that just because there
are CMS ERX pilots doesn’t mean that those are the only
pilots that will be out there, meaning that there might be another pilot. So
that’s, am I being clear? That’s sort of the three things.

Now, relative to GELLO, you’re right, nobody is doing anything. What our
small little ad hoc task group did was a validation to see if
GELLO could be used for prior authorization, and our findings were that it could
be. Now we’re a long way away from pilot testing that, well, not long, but we’re
much further back from pilot testing that than we would be for the HL7 PA
attachment. I mentioned a number of things that would need to be done before it
could be pilot tested, and those things are going to have to be something that we
might have to have funded as well.

MR. REYNOLDS: Karen, you had a comment?

MS. TRUDEL: Yeah I do and I’d like to kind of hark back to a concept that we
discussed during the hearings that we had over that one summer on the original
set of recommendations. And that was that I think the subcommittee came to the
conclusion after a couple of hearings that there was no way that we could bite
all of this off at one time, and that even the pilots couldn't pilot test
everything, and that there was a need to keep in mind a longer term,
what we were calling a research agenda, if you remember. And that there were
things that we weren’t going to be able to tackle in the first round or the
second round and who knows how many rounds there are going to be. And so I
think that fits in with what Tony was saying, having missed the boat so to
speak on the CMS MMA-required pilot, does it take something off the
research agenda, and that I think the subcommittee could be very valuable in
trying to point the industry towards areas where there is promise, whether
that’s the 275 and the PA attachment or GELLO or whatever, and I think that’s a
really good thing to consider in future phases.

DR. STEINDEL: Thank you very much, Karen, I think that’s the direction that
I would like to see the committee recommend we go with.

MR. REYNOLDS: And I think that's been helpful, we obviously had everybody
stand up and agree or disagree with Lynne, and then we got excited about how we
helped move Laura's along, and then we were confused initially on what Tony was
talking about. And then I think, I was sensing around the table too fast, but
the research word and the ability to approach it that way puts it back into an
environment where a lot more needs to be heard, and if we balloted it the same
way, everybody stand up in the back that is ready to go with this, there may
not be as many people standing up as previously. And that's not good or bad,
because the point is
if we’re making recommendations we’ve got to understand the size or complexity
of the recommendation and the timing of the recommendation, not just whether or
not we understand or don’t understand. Ross?

MR. MARTIN: This is Ross Martin with Pfizer, if I could just make a comment.
I think the scope that we've talked about here, just to the points that have
just been made, goes even farther beyond these initial short term foreseeables
that we've described here, because we're also talking about things like the
discussion of how do you embed this into the structured product label, well
that would be an enhancement of the SPL. We've been talking about how do you
build this into what would most likely be the formulary and benefit standard as
the vehicle; kind of like we've taken the structured codified SIG or the
medication history and stuffed it into SCRIPT, we would do the same thing with
this, stuff it into the formulary and benefit standard. Those are all things
that, these are long term projects and so we’ve been trying to map it out as a
vision for where this can go. And we have to kind of establish that the vision
is valid and that this makes some sense. If we were doing this from scratch
with no legacy of required standards we’d build this completely differently
because we wouldn’t have to worry about the named HIPAA standards and those
sorts of things. Thank you.

MR. REYNOLDS: No, that’s great, that’s very helpful, and I guess to kind of
close this segment, if I went back to Laura's graphic that she had with the
child playing with the blocks I would maybe view a little differently what NCPDP
and you guys as an industry have done, I mean you have leaped tall buildings in
a single bound and I believe, based on Tony's update, that you're actually now
creating buildings we can't quite visualize yet. But seriously, I commend, I think
everybody on the committee commends because this is, you guys are the poster
child for the right way to make an industry move and that’s pretty exciting, so
working with us and CMS, this has been a pretty rewarding event to see what you
guys have pulled off so thank you very much and I think that’s, I’m echoing
that for the whole committee. So thank you very much, you can stand down now.

— [Laughter.] —

MR. SCHUETH: I just want to thank you and Michael as well for your comments
now and before and say that I’m going to take that back to the task group
because they’re the people, I got to stand up here and take some of the credit
but they’re doing all the work, they’re great people. And I wish I could name
them all but I’d forget somebody.

Agenda Item: Subcommittee Discussion – Mr. Reynolds

MR. REYNOLDS: So let’s move into subcommittee discussion and I want to try
to quickly touch on four items. First, I’ve asked Michael to read what he had
put together as a base structure for a letter and he has agreed that tomorrow
morning he will actually have a draft for us, so he will read kind of that,
we’ll go through that and see if anybody, make sure we kind of agree with the
substance and then we’ll let him do his magic with that as he does with words.

DR. FITZMAURICE: What I'll have is a draft paragraph that sets out what I
believe the recommendation would be and let you then have at it.

MR. REYNOLDS: That would be fine. So why don't you read what you kind of, yeah,
just so we get a sense.

DR. FITZMAURICE: It will say something like NCVHS recommends that CMS
conduct an analysis of the backward compatibility of 8.1 with 5.0 and advise
the Secretary. If the Secretary determines 8.1 to be backwards compatible with
5.0, we recommend that those who implement 8.1 be considered to be in compliance,
that is, neither compelled nor prohibited from using it.

DR. STEINDEL: Harry, my comment on that is if we recall what Karen read to
us from the preamble of the reg we are supposed to recommend to the Secretary
that it’s backward compatible and not that CMS recommends to the Secretary that
it’s backward compatible.

MS. TRUDEL: Yeah, I think it’s, I’m sensing a reluctance to do that based on
one venue but I think it’s pretty reasonable to tell the Secretary that the
committee heard testimony on it and that it certainly appears based on what we
were told that backwards compatibility appears to exist and based on input from
the responsible SDO, based on industry knowledge, this appears to be the thing
to do. I don’t think there’s a necessity to say absolutely we’ve talked to
everybody we can think of, I mean there are some things that you could say to
inform the process.

DR. COHN: I was actually just going to agree with Karen, and I think we're
sort of talking about potentially, I mean both the concept that we heard that
it was backward compatible, and we also heard from the industry in testimony
that they concurred that it was backward compatible. We're probably
asking CMS to further confirm that backward compatibility, and if so we
recommend X, Y, and Z. Does that make sense? HHS, okay.

MR. BLAIR: My comment was going to be very similar to Simon’s only I’d
probably reverse the sequence and I’d have the letter indicate that based on
the testimony that we heard we would recommend this pending verification by CMS
of the, of HHS, of the backward compatibility.

DR. WARREN: I would think that it's not up to CMS or the committee to do the
analysis to see if they're backward compatible, I would ask NCPDP to say what
the differences are between the two.

MR. BLAIR: They did, so we’re just asking HHS to verify, we’re recommending
that we go forward pending verification that CMS —

DR. WARREN: But what more verification do we need? Do we need a written
document from NCPDP to say that? The verbal testimony we heard here? I mean if
we're recommending to the Secretary then we need to do the verification, not —

MR. REYNOLDS: Karen, were you going to give us some guidance?

MS. TRUDEL: Yes, I was. It’s always the Secretary’s decision, the Secretary
can always decide whether he agrees or not, so I think the main point to make
is what Jeff said and that’s that, and what Judy said also, and that’s that
we’ve received information that would seem to indicate to us that backwards
compatibility exists and that we’ve heard also from the industry and from the
SDO and I mean at that point I think the recommendation is that the Secretary
needs to look at what the committee is telling him and make a determination as
to whether he wishes to move forward.

MR. REYNOLDS: And is there, we also heard a bit of concern that people
aren’t going to 5.0 so is there anything we need to say —

DR. COHN: To 8.1.

MR. REYNOLDS: No, what Terri Byrne was saying is that people would have to
go backwards to go to 5.0 and right now the regs —

MS. TRUDEL: I think that’s, I don’t want to say it’s irrelevant but it’s not
necessarily germane to the steps that we put into place in the regulation.

MR. REYNOLDS: That’s great, just since we heard it I wanted to at least
bring it up.

DR. STEINDEL: Harry, I was going to actually talk to that point and I
thought I heard it slightly differently. What I actually heard was 5.0 is
essentially not an implemented version, that 4.2 is the implemented version at
this time, 5.0 was written as a version that was intended to replace it but it
was overtaken by events with MMA and by the time 5.0 was complete they actually
had 8.0 around and then 8.1 and I think we should mention that.

MR. REYNOLDS: Said much better than I did. Stan?

DR. HUFF: I think, to address Judy's question of what does verification mean,
what I interpret it as is that what we're saying is we heard the testimony from
NCPDP, we think that’s credible, we think it’s accurate, I think all we’re
saying is that somebody who is not NCPDP should go through the standard and
just look and say yeah, all of the things that they said are in there, the
things that are not there, there are not additional big things in there that
they didn’t mention, and we’re done.

MR. REYNOLDS: Or are we saying that we’re recommending that 8.1 be allowed
in the pilots?

DR. STEINDEL: That doesn’t need to be mentioned.

MR. BLAIR: I think that the wording should go along the lines of based on
the testimony we received it appears as if it’s backward compatible and we can
go forward and that that would be our recommendation. And I’d add one more word
to what I had suggested before in terms, I had said pending verification by
HHS, I’d say pending independent verification by HHS, independent from NCVHS.

MR. REYNOLDS: There’s a lot of head shaking no.

DR. STEINDEL: I think we need to consider what Karen mentioned about this
idea of putting anything in about verification, I mean we just need to state
what we heard in terms of the testimony and we can go as far as we want with
that and say based on what we heard 8.1 is backward compatible with 5.0 and we
would recommend that the Secretary do whatever the wording is in the
regulation. Because whenever we send a letter to the Secretary the first thing
the Secretary does is hands it to a group and says okay, is the NCVHS correct
in their assessment, so the Secretary is going to verify it and we don’t have
to ask him to do it.

MS. TRUDEL: One point I wanted to make about this is that this is a process
that's untested, it's brand new; from an administrative and an operational
perspective this is something HHS has never done before, and we will need to
sit down with our general counsel’s office and figure out what our due
diligence is. And so I think that’s our problem, I don’t think that’s your
problem so I think in this particular case it almost makes sense to say this is
what we heard, this is what we recommend, and Mr. Secretary you take it from
there.
MR. REYNOLDS: And let it be in the minutes that we appreciate that.

Okay, Michael, you got enough to dance tonight? Simon?

DR. COHN: I guess I should ask, were you going to add some pieces around
here about the fact that it should replace the 5.0, should be whatever it is,
the 5.0, for the foundation standard functions? Or do we need to say something,
I think we need to say something along that.

DR. FITZMAURICE: I wasn’t going to say something like that because that may
require some regulatory action, another NPRM, if we’re going to change the
definition of what is in the regulation.

DR. COHN: Well, no, the reason I'm asking is because 8.0 is not backward
compatible, 8.1 is not backward compatible with 5.0 except for the foundation
standard functions, it adds new functions —

DR. FITZMAURICE: Except for the medication history.

DR. COHN: Yeah, so I’m just sort of saying, I’m trying to think if I would
want to say that.

DR. STEINDEL: I think Karen actually addressed that earlier, when we do say
backward compatibility all that we are referring to is the foundation
standards, those functions. And I mean we can put that someplace in the letter.
I think one of our problems and a lot of this letter may have to be drafted
offline after we see the first go because I think some of the reg language that
we would want to embellish is best done offline. And that’s with the foundation
standard and that sort of stuff and exactly what we’re recommending, the exact
language and that sort of stuff.

MR. REYNOLDS: Karen, were you planning to join us tomorrow? Are you going to
be here? Okay, all right, we'll work with it. Is there any way we can get you a
copy of his draft in the morning for you to look at? And we’d love your
comments as soon as we could get them. Okay, thank you Michael, we appreciate
it, we look forward to what we see in the morning.

Okay, the next item I’d like to cover in the discussion is we passed out
today the ROI, five possible actions based on testimony we’ve already had.
Again, if one of our goals is to try by February to have a letter on HIPAA ROI,
and tomorrow will be our last bit of information that we expect to get, I just
want to make sure that everybody looks at these this evening and, if you have
any changes to these, that you at least let me have them so that we can use them
when we start trying to put together some kind of a draft letter. If I
don't get many comments, since we've already been through it once, then we won't
have to discuss these at length, they'll be put in the letter. If there is
significant comment from anybody on any of these then obviously when we rewrite
a draft we'll take all those comments into consideration.

But we've kind of adjudicated these once, I think there are probably some
minor changes, I know I may have a couple, but the point is I ask everybody to
look back, because some of these hearings have spread out over time and you
really start to lose the oomph of what we have. So I'm also going to throw out
to the committee a similar recommendation to both Stan and Judy: we've heard a
significant amount of testimony already on secondary uses of clinical
information and we've heard the same thing on matching patients to records, so
we should start building a bit of a portfolio on some of the things that we've
clearly heard, not that we've agreed on yet. Because right now we're keeping
four or five subjects at full speed, and if we don't build these portfolios,
trying to go back later and figure out everything we thought we heard will be
difficult, so I think it would be good.

So I’ve got the HIPAA ROI and I’ll keep up with this list plus what we hear
tomorrow, I would like Stan if you would do the similar thing for secondary
uses of clinical information, start building a portfolio of what we think we’ve
heard. Now obviously then it needs to be vetted with the committee but the
point is you’re building that portfolio and Judy the same thing based on what
we’ve heard, then I think that will be a good way for us to not lose kind of a
base sense of what we heard.

DR. HUFF: Sort of what we heard or my thoughts about what we might suggest?

MR. REYNOLDS: I'd like both, I think both, well basically what you're doing
is you’re starting to craft the basis of a letter, you’re starting the basis of
the findings.

DR. WARREN: I was going to say, we want to base this on the format that we
developed in e-prescribing, right, which we did do a summary of what we heard
in the beginning of it.

MR. REYNOLDS: Here’s a summary of what we heard and then here are findings
and then recommendations is kind of what we’re doing. And again, I just threw
that out there, this is now open for discussion, I just tried to come forward
with a process. Jeff?

MR. BLAIR: Harry, thank you, I think your suggestion was very appropriate.
If we are looking in particular at the matching of patients to their records and
if we want to try to pull together some recommendations by the April timeframe,
I'm just thinking here, I think it would be okay if Judy has for us the
portfolio and maybe some preliminary ideas of some recommendations to
review at our February meeting. Do you think that fits? Judy, is that something
that you think you could do?

DR. WARREN: Well I think we’re still going to be hearing some testimony but
I’ll certainly have a preliminary draft.

MR. REYNOLDS: It starts the template, same thing with Stan.

And now having discussed that what we need to do is decide on I think both
of those issues, and back to Judy’s comment this morning, Judy said so what
else do we need to hear and I would echo exactly the same thing to our subject
that Stan has been shepherding through for us because obviously if we do get
some guidance on what we do next, we’ve had some testimony, we’ve begun to
learn the subject, we’ve begun to formulate thoughts, where are we going? So,
so far so good. Karen?

MS. TRUDEL: With respect to the matching patients to records, might it be
helpful to hear from Social Security Administration in that their identifier
was considered today to be key and also that they have probably the most
experience in the entire country in terms of assigning people to identifiers
and trying to avoid duplicates and trying to do matching algorithms and I know
they have a lot of information about statistics about duplicates and scrambled
records and things like that. So in terms of dealing with very large national
files I think they probably have a good story to tell. The VA has a master patient
index, it's internal but it's pretty large and that might be interesting. And I
think RxHub has some kind of way to route patients to their health, we heard
from them, okay, sorry.

DR. STEINDEL: With regard to VA and maybe even DOD, and we were also talking
about this a little bit at lunch, about who else to talk to, it might be useful
to talk to some of the, and I lump those two in this category, the EHR
vendors/users who have large systems, on how they match patients to their
records internally. And VA and DOD would be one subset of that so I think it's
a very good recommendation. And Social Security also I think is wonderful even
though I am familiar with their death records.

DR. COHN: I guess Karen actually made me start wondering whether there’s
activities outside of health care that we should be looking at and I was maybe
half seriously suggesting, if you think about it people like credit card
companies are doing similar sorts of activities on a relatively large scale.
Now as I say that I have to admit that I carry a number of them so I'm not
necessarily trying to say we're uniquely identified, but once again we ought to
at least think about whether there are other industries we can take some learnings from.

DR. STEINDEL: Simon, in that regard I will give you one industry and one
company that I'm very familiar with around the idea of matching goods to their
records, and that's FedEx. And UPS, United Parcel Service, is another one;
those two take incomplete addresses on parcels and they deliver them.

MS. GOVAN-JENKINS: In looking at outside industries, I think I mentioned a
couple of months ago the Department of Corrections.

MR. REYNOLDS: Any other input to Judy?

DR. WARREN: Let me just say there were a couple of other things that I
heard, what people talked about over lunch. One of the things was maybe looking
at some of the large hospital organizations, some of the large multi-hospital,
multi-clinic places, that they’re certainly sharing information across people,
they might be appropriate to bring in. We had also talked a little bit about
whether or not it would be of interest to bring in a vendor panel to find out
how the major EHR vendors are going to be dealing with matching data. I have a
note here, and Michael’s gone, something was mentioned earlier this morning
about AHRQ and trying to pull things together, I have no idea what that means.
And then I think Steve mentioned that HIMSS was taking a look at becoming a
player in certification of algorithms. Do you know if that’s true?


MR. REYNOLDS: And Judy, on what Michael said this morning, I went back to my
notes, there was a discussion on what is a national standard for matching and
that's what he said they are pursuing. And I wrote a note, to what end, I
mean what are they actually doing.

DR. WARREN: Well that's what I'm wondering and I think that's where CCHIT is,
and maybe something from the two of them.

DR. STEINDEL: I can tell you in the interim, because I'm involved with
workgroups in CCHIT, we haven't even started to discuss that but I'm sure it
will come up on the radar screen when they start talking about architecture —

DR. WARREN: Any idea when?

DR. STEINDEL: The architecture certification is not on the plate for two
years, a year and a half now.

MS. TRUDEL: What Michael was talking about was AHRQ’s National Resource
Center, which is sort of a clearinghouse for best practices for HIT, and so I
think he was thinking about best practices in terms of matching algorithms, and
there he is.

MR. REYNOLDS: Okay, any other, Judy?

DR. WARREN: I guess my next question is how much time do we have, because I
know our February meeting is short, would we have time for one panel?

MR. REYNOLDS: Well I guess the question maybe to Simon and Karen, obviously
one of our goals is to do things in the right timeframe and as we look at
everything that's going on from ONC, everything that's going on through CMS,
everything that we're doing, what is the kind of timeframe, Simon, that you and
AHIC and everything else see, because I've seen some of the, they call
them the quick hits or whatever they call that out of AHIC, some of these
things may have been mentioned, so what do we see as a timeframe? So I guess,
Karen, or Simon and Karen, what kind of timeframe do you think is
important to have some kind of a preliminary thought on what we feel about —

DR. COHN: I'm sure Karen will have additional comments, I think I would
observe first of all my memory is that the timeframe was actually more
driven by Jeff than by any of the comments that any of us made in terms of
this, and I think it may have been really his perceptions in relationship to
RHIOs and that was what was sort of driving that.

I actually, and I will apologize, I missed the very beginning of the NHIN
conversation, but I think that this is one of those sort of fundamental NHIN
building blocks so we should among other things engage them and talk to them,
and once again, he may have mentioned it early on, I don’t know if he did or
not but it seems to me to be one of those early conversations to get a view of
where they’re going, what their timeframes are, I mean I think that that to me
is something that should help feed into that.

MR. REYNOLDS: Well what they said is all these awards are 12 months and by
summer, I think, so if we could do something in the April timeframe to where we
could start building a draft, because our full meetings are what, February —

DR. COHN: And June.

MR. REYNOLDS: And June, okay, if we could shoot to have maybe a letter in
June then that may coincide really nicely with where the pilots are from NHIN
and give some good input. That may be a good target —

DR. COHN: But we might also want to ask them to come and talk about their
approaches to all of this stuff knowing that some of them may be things we’ve
already seen.

MR. REYNOLDS: Are you good with that? That appropriate? Judy, you have —

DR. WARREN: We have a day and a half in February so how much of that time,
we need to think about it after we talk about the rest of our things, and I
think we have a meeting in April and so by April we should be —

MR. REYNOLDS: I would guess we probably ought to have a full day in February,
because if by our meeting in April we're not writing a draft, we're not giving
Simon a letter for the full committee in June, that's for sure.

DR. WARREN: So I’m wondering if in February we need to bring, you’re saying
bring ONC and AHIC in to find out where they are?

DR. COHN: No, I actually had a suggestion that you bring in the NHIN people,
I think it's really a question of what are their approaches to this issue and
what do they see as the problems —

DR. WARREN: And who are those? Are you talking about from the federal
perspective or the people who are trying to do this?

DR. COHN: Well, I think that either the contract, the person who is managing
the contracts, or the people who actually have the contracts; probably we could
ask who would be best able to describe the issues and approaches that they're —

MR. REYNOLDS: Is that something, Simon, that would be worthwhile for you to
touch base with David on before we do it, or do you feel comfortable with us just
approaching them? That's a question, that's not a concern, that's a basic —

DR. COHN: It actually didn’t sound to me like we were approaching David, it
sounded like we were approaching Lemont.

MR. REYNOLDS: You said them, I thought we were going straight to IBM and
Grumman and the others.

DR. COHN: Oh, no, no —

MR. REYNOLDS: That's fine, I wanted to understand where our tie-in was so
that we didn't start calling them and then all of a sudden they start calling —

DR. COHN: No, I think we would deal with the contract manager.

MR. REYNOLDS: As you’ve told me many times we’re entering new ground so I’m
trying to figure out —

DR. WARREN: [Comment off microphone.]

DR. COHN: Well that’s right but I think that basically —

MR. REYNOLDS: Judy, you set?

DR. WARREN: I’m set.

MR. REYNOLDS: Do we want to talk a few minutes on the clinical, matching of
clinical, Stan, where do you see, what do you see as our progression there and
what kind of timeframe are we considering there so we can make sure we have all
this on our plan?

DR. HUFF: Well I think this other area is more strategic, so I think
we're right to focus in the short term on the linking of patients to their
data. My thought is that the work probably should be split between the
subcommittees, and we've never talked specifically though we started kind of
hinting about it last time. It seems to me that this subcommittee,
Standards and Security, could most directly pursue recommendations, or thoughts
about whatever we wanted to do about it, relative to auto coding and
the sorts of things that we heard from AHIMA and things related to automated
coding from SNOMED and SNOMED-encoded clinical data, that sort of stuff. And
that a lot of the other activities probably relate more directly, or could be
pursued as appropriate goals, through the Quality Committee. So I mean that's
sort of the one division of labor at least that sort of occurred to me
around those subjects.

And then I probably haven't thought as far ahead, and I would in fact invite
suggestions about who else we should have testify or what else we need to know.
It may be that the best way to direct that is to sort of again do the summary
and start making some, a list of what we might recommend and that might clarify
where we needed either further testimony or additional information to confirm
what we might recommend. But right now I don’t have any thoughts —

MR. REYNOLDS: No, that’s fine. Simon?

DR. COHN: I guess I had some concrete suggestions and thoughts on this one.
I knew that we were sort of in some ways still trying to figure out where we’re
going on this one but at the same time I think that there’s some very concrete
pieces here that we’re talking about and I think you began to touch on them, I
mean underlying this is this issue of mapping, which I think Vivian and NLM are
supposed to brief us on, exactly how well the mapping is going and what's going
on with all of that, and I think that's one of the sort of foundational things;
once again I think that needs to be probably in February,
at least hearing an update on how things are going.

From my view the real question, and once again I'm sort of deferring to you in
this area because I like your division, I think that makes a lot of sense, but the
question I keep asking is well, I mean first of all how is the mapping going
and then the other question of course is well even if we had the mapping great
how far does that get us and what other barriers are there, what’s the next
barrier and what’s the next barrier because I’m pretty certain that even if the
NLM had a compliant map, which of course in and of itself is going to take a
couple of steps, I have a feeling that you wouldn’t be able to sort of turn on
your system and have automated submissions to your choice of payer with
virtually no secondary review. But as I say that that’s just a supposition, I
may be completely wrong on that and you may have actually, that may be the only
issue that’s standing in the way but I think that’s really, it’s sort of like
what is the next step after that that’s the problem. Is that a reasonable set
of questions? I mean I agree with you, we don’t need to solve that in February
but certainly hearing how the mapping is going would be I think very —

MR. REYNOLDS: And that was actually the timeframe we were shooting for on NLM,
February. Steve?

DR. STEINDEL: This addresses Stan's comment about the work that may be
done by other subcommittees; he mentioned Quality as a workgroup and another
group that came to my mind was Populations. And there are two groups right now,
and they seem to be the poster children that we always ask for
information, well actually one isn't: there's the VA and there's the Indian
Health Service, which are both using information from their electronic health
records for population health statistics, monitoring, and decision making, and
Populations might want to hear from them on how they're using that. I suspect,
though I'm not 100 percent certain, that there's also a similar system in DOD.

MR. REYNOLDS: Okay. Is there any other subject —

DR. HUFF: So how much time did you want to take for this in February? Is
encouraging a report from the NLM the only thing we want to do, or is there —

MR. REYNOLDS: I think from the standpoint of time, if we're going to give
Judy a day and we've got a day and a half, that would leave us a half, a quarter
of a day to do something else, get an update on any of this other stuff that's
going on. If we feel that we need to close, the ROI letter would be what we
would use the other part of the time for, so maybe NLM might be the only thing
we could fit in. So we'll leave a little bit of time the first day, after
Judy's finished we'll have a little bit of time the first day to look at the
letter and then part of the second half of the next day to finish the letter so
that then it'd be, well, I'm sorry, February is after, it's after the full
committee —

DR. COHN: Were you thinking about doing the letter for the February meeting?
What about conference calls —

MR. REYNOLDS: That's exactly right, we're just trying to work this out here.
So Stan, maybe NLM and one other might be it; think about what they are and if you
get an idea of what that might be then let's talk about it before we finalize the
schedule. Steve?

DR. STEINDEL: Harry, I have to put on another hat and embellish my comment a
little bit with the Indian Health Service because their population health
systems won a Public Health Davies Award this year that will be given to them
at the HIMSS meeting.

MR. REYNOLDS: Okay, is there any other business that we want to conduct
today? Dinner is at 7:00 at Magiannos, and thank you very much, and we stand
adjourned.

[Whereupon at 5:20 p.m. the meeting was adjourned.]