[This Transcript is Unedited]

DEPARTMENT OF HEALTH AND HUMAN SERVICES

NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS

QUALITY WORKGROUP

September 14, 2006

Hubert H. Humphrey Building
200 Independence Avenue, S.W.
Washington, DC 20201

Proceedings by:
CASET Associates, Ltd.
10201 Lee Highway, Suite 180
Fairfax, Virginia 22030
(703) 352-0091

P R O C E E D I N G S [8:14 a.m.]

MR. HUNGATE: I want to begin by welcoming everyone, and especially Mary Beth Farquhar, who is our newly appointed lead staff. Let's go around the table; I'd like to have Mary Beth talk a little bit about her background, and everyone introduce yourselves as well.

For the record I’m Bob Hungate and on my left —

DR. COHN: Simon Cohn.

DR. STEINWACHS: Don Steinwachs.

DR. CARR: Justine Carr.

MR. LOCALIO: Russell Localio.

MS. GREENBERG: Marjorie Greenberg.

MS. JACKSON: Debbie Jackson.

DR. SCANLON: Bill Scanlon.

MS. FARQUHAR: Mary Farquhar.

MR. HUNGATE: And over the side Dan Rodey(?) from AHIMA.

Very good, welcome all, and Carol McCall will call in as soon as she can. So Mary Beth, why don't you just start? I was particularly pleased to hear that Mary Beth's start was in performance measurement; I think it's entirely appropriate to this committee, and so my heartfelt welcome to the position.

MS. FARQUHAR: Thank you for having me, we enjoyed coming along and collaborating and you guys are really helping us with the quality repairs(?) at this point.

I started as a nurse; I did intensive care, and I worked just about everywhere in the hospital except for pediatrics. Then I went to home care and worked in pediatrics, so I have gotten a lot of experience in a lot of areas. I've been in the health care field for about 25 years; I went from ICU nursing to adult nursing to cardiac to ER to home health care to policy, health policy. I have a master's in nursing and I'm working on my Ph.D. in public policy and public administration, specifically performance measurement and the politics of performance measurement. It is a little hairy topic and very interesting.

So that is kind of my background in a nutshell. I've been at AHRQ; I started at AHRQ as an intern and worked on the CAHPS project for about two to three years. I told them, I'll see you later, and they said, where do you think you're going, and gave me a position there on CAHPS, and then I applied for the quality indicators, which I've been working on for about a year and a half now. So that's where I'm at in my life.

MR. HUNGATE: Okay, wonderful. Now I don’t want to duplicate what you’re going to do for the full committee so you pick what we need to deal with here in our context and take it from there and there are some follow-on things that you and Carol had talked about and I hope we can get Carol linked in but we’ll see. Doesn’t feel promising.

MS. FARQUHAR: Well, what we had been working on with the quality indicators, and the whole impetus of involving you people or this group, was submission of the quality indicators to the National Quality Forum consensus development process. What we have done is a lot of preliminary work: we have about 84 indicators, and we've tiered them according to the evidence base that's available, making tier one the strongest down to tier four, the ones that need some work and need some extra shoring up, shall we say. What we ended up doing was submitting 32 of the indicators: a subset of the patient safety indicators, a subset of the pediatric indicators, and a subset of the inpatient quality indicators.

While we have a number of workgroups that you all have kindly been involved in and been very, very helpful with, we decided to do a workgroup on risk adjustment. Right now the risk adjustment for the quality indicators is the APR-DRGs; we have an agreement with 3M so that that's available online for people free of charge, and that's all incorporated into the tool that we have available from our website. But we wanted to make sure that the risk adjustment that we do have is the appropriate one for the quality indicators and is the best that it could be at this point.
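[Editorially, the risk adjustment being discussed is, at its core, a comparison of a hospital's observed event rate against the rate that would be expected given its case-mix severity (here represented by APR-DRG-like strata). The sketch below is a toy illustration of indirect standardization, not AHRQ's actual QI software algorithm; the strata names, reference rates, and counts are all invented.]

```python
# Toy sketch of indirect risk adjustment via an observed/expected (O/E) ratio.
# Strata stand in for severity classes (e.g., APR-DRG severity levels);
# every number here is illustrative, not from the AHRQ QI specifications.

reference_rates = {"minor": 0.01, "moderate": 0.03, "major": 0.10}  # reference rate per stratum

def risk_adjusted_rate(hospital_counts, hospital_events, overall_rate):
    """hospital_counts: discharges per severity stratum;
    hospital_events: total observed adverse events at the hospital;
    overall_rate: the population-wide rate for the indicator."""
    total = sum(hospital_counts.values())
    # Expected events if this hospital performed at the reference rates,
    # given its own case mix.
    expected = sum(n * reference_rates[s] for s, n in hospital_counts.items())
    observed_rate = hospital_events / total
    expected_rate = expected / total
    # Indirect standardization: scale the overall rate by the O/E ratio.
    return overall_rate * (observed_rate / expected_rate)

counts = {"minor": 500, "moderate": 300, "major": 200}
print(risk_adjusted_rate(counts, 30, 0.04))
```

A hospital with a sicker case mix gets a larger expected count, so the same raw event rate yields a lower (better) adjusted rate than it would at a hospital with mostly low-severity patients.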

One of the hallmarks of the quality indicators is that we constantly refine the indicators to the point where they are as strong as they can be for administrative data, and that's what we're talking about here: administrative data.

We also had three workgroups, actually, on the composite measures. We did the prevention quality indicators, which are the area level indicators; we just wanted to have those done because the National Quality Report needed them, and we also got requests from the Commonwealth Fund to provide them. But we're not submitting those, because the area level is not a focus of the NQF at this point, so those are not submitted at this point.

We are working on the patient safety indicator composite, which has gone out for draft I believe, or is just about out for draft, and then the inpatient quality indicators. The patient safety indicators have one composite, just overall patient safety for a hospital, and then the inpatient quality indicator composites are mortality based on procedures and mortality based on conditions, I believe.
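[Editorially, a composite of the kind described rolls several component indicators into one summary number, typically as a weighted combination of each indicator's risk-adjusted result. The sketch below is a deliberately simplified illustration; the indicator names are real PSIs, but the ratios and weights are invented, and the actual AHRQ composite methodology additionally applies reliability weighting and shrinkage, which this omits.]

```python
# Toy composite: weighted average of per-indicator observed/expected ratios.
# Ratios and weights below are invented for illustration only.

def composite(ratios, weights):
    """ratios: per-indicator O/E ratios (1.0 = performing as expected);
    weights: relative importance of each indicator, summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * ratios[k] for k in ratios)

psi_ratios = {
    "PSI-3 pressure ulcer": 1.2,        # worse than expected
    "PSI-12 postop DVT/PE": 0.8,        # better than expected
    "PSI-15 accidental puncture": 1.0,  # as expected
}
psi_weights = {
    "PSI-3 pressure ulcer": 0.2,
    "PSI-12 postop DVT/PE": 0.5,
    "PSI-15 accidental puncture": 0.3,
}
print(composite(psi_ratios, psi_weights))  # a value above 1 would mean worse than expected overall
```

Note how the choice of weights drives the answer: with most of the weight on the indicator where this hypothetical hospital does well, the composite comes out below 1 even though one component is poor, which is exactly the kind of sensitivity the discussion about vetting composites is getting at.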

Those are the ones that we're dealing with now; I have draft reports for those. We're going to submit the composites, and we're also submitting a reporting template that basically takes the quality indicators and puts them in layman's language, so that a consumer can look at it and say, okay, tell me what metabolic derangement is in my terms, and it will talk to them in their terms, so that's a tough one. So that's going to go in, and that's another issue with the NQF that you may be able to help me with, because this reporting template is brand new to them; they've not done it before, and they're trying to figure out a way to have it run through the consensus process and have it available as a package with these quality indicators. So we're looking at putting in kind of a framework type of issue, so ideas would be really appreciated at this point, because I don't think they know what to do.

And then we also had an evaluation of the quality indicators by the RAND group. They did a brief presentation to this group to start, they've done a draft report, and we've gotten a lot of information; what we don't have is strategic planning from this group. They talked to about 65 individuals or key informants, and basically they think AHRQ is wonderful, it lends credibility, these are scientifically based, we want more of them, we want you to provide more service, but we don't know where you want to go next. So ideas like that, those kinds of things, and that's kind of what we're looking at now.

Carol and I talked a bit about what possibly this group can do to help out. Basically you can vet the reports here, and I think that will be really helpful, but the thing that I'm really excited about is the reporting initiative: looking at reporting using administrative data versus using any kind of data, and what some caveats are for public comparative reporting, comparing hospital to hospital, and doing some kind of a hearing on that; that's kind of what we talked about. RAND was going to do another presentation, and maybe you guys can opine on that as well. Composite measures, again, I'm open to suggestions on those, and with risk adjustment you all can definitely help. I have a draft report, and I opened it and I closed it very quickly yesterday because it had all these funny symbols that I had no idea —

You saw that too?

— [Laughter.] —

MS. FARQUHAR: So that’s it in a nutshell and that was my five minute spiel.

MR. HUNGATE: So the floor is open for discussion of where we go from here.

DR. COHN: Maybe I can just ask some fundamental questions, which are, obviously we are involved in what might be described as a first phase activity. My memory from my earliest conversations with Carolyn Clancy was that this was supposed to be a real fast track effort, supposed to be done I think in August, submitted to NQF sometime in August and all that. So what is the real timeline for all of this, so that we can begin to figure out what we're doing in relationship to it?

MS. FARQUHAR: We did submit in August. They put out a call for measures on August 6th, and the call for measures closed on September 7th, so we submitted on September 7th. Right now they have put out a call for nominees, and they got hundreds of people to sit on these committees. What we have planned, and we have a contract with NQF to have this done, which I'm the project officer on, is a steering committee and then five technical advisory panels, what they call TAPs, and basically those will cover every area. We have one for surgical complications, aka patient safety, we have one for pediatrics, we have one for the composite measures, we have one for the reporting group, and I forget the last one, probably mortality and those kinds of things.

So that's the plan. They have asked for nominations and got hundreds, from what I understand. We put in our two cents, and I'm hoping that people here did some things too, like Justine and a few of you. But they haven't made any selections yet, because they are changing their policies. Before Janet Corrigan came on, their policies were a little bit conflicting, shall we say: I as a developer could sit on a TAP and be able to opine and vote, so it was a little bit of a conflict of interest. Now they have decided no, AHRQ can't be on that committee, so I'm a liaison to the steering committee; I can attend the TAPs, but I can't really participate other than answering questions.

So I asked them to hold off making selections for the committees, because you don't know who is going to submit measures to the call for measures, and if I have to play by the rules then everybody else does too.

From what I understand, Leapfrog will be submitting measures and the Pediatric Quality Alliance will be submitting measures; those are the only two I've heard of at this point. Regardless, the call for measures was very, very focused so we wouldn't have a zillion measures to go through; that was the whole impetus of it.

So the timeline is moving along. I don't expect them to convene or select members until maybe next month, October, and then maybe, maybe the steering committee will convene by the end of the year, and then the TAPs will get their charge. We're looking at a two year process here: the first thing they're going to do is the measures, then the composites, and then the reporting template, which makes sense, so it will be kind of a phased thing over a two year period. So our contract with them ends July 31st, '08, and that was a lot of negotiating.

MS. GREENBERG: It might be a post-BBQ haze here, so AHRQ has submitted measures to NQF, did you say Janet Corrigan has gone there now?

MS. FARQUHAR: Janet Corrigan took over as the president and CEO and they merged her old group with that group so it’s kind of two —

MS. GREENBERG: I knew she’d left and kind of set up her own group.

MS. FARQUHAR: Some kind of federation, it was all the higher ups going and all them, the big employers —

DR. STEINWACHS: This was a merger of NQF —

MS. GREENBERG: But then you also have a contract with NQF to vet the measures that you submitted to them, and so the people who developed the measures can't sit on the panels that will vet the measures; they could vet other measures, I guess.

MS. FARQUHAR: Probably.

MS. GREENBERG: But so then the people from NCVHS who helped in the process of the measures that you submitted to NQF obviously can’t be part of the process of vetting those measures?

MS. FARQUHAR: I’m not so sure that that’s true, they’re talking specifically about measures developers, the people that developed the QIs are Patrick Romano, Jeff Geppert(?), Stanford and those folks out there and they called me and said basically were they part of the development team and I said yes, and they said okay fine and they struck them.

MS. GREENBERG: But if you just were reviewers, you could maybe be reviewers as well, so that’s what I was wondering if there was a role for committee members in that regard.

DR. COHN: As interested as I am in the process, I'm sort of thinking, well, geez, there are many roles that the NCVHS can play, and I'm not 100 percent certain that our best role is to be part of workgroups in NQF. That may be exactly the right role, but the question is: are there crosscutting issues for which some additional focus would be valuable? I think something more like a public hearing, thoughtful input that really looks at things a little more globally, would be of value. And I love linear regression as much as the next man, but, I was telling Marjorie Greenberg that I got an A+ in biostat; now of course I remember about this much of linear regression, but this was like 25 years ago —

MS. GREENBERG: That must be why they put you on this committee.

DR. COHN: I think you’re right. But having said that I think the real question is trying to find the sweet spot and really hope we can add value, which I think is really sort of the question you’re asking.

DR. STEINWACHS: You were sort of suggesting that the quality reporting, that sounds attractive to me.

MS. FARQUHAR: As far as administrative data is concerned, you've got lovers of the administrative data and you've got haters of it, and basically it's one of those things that always comes up: well, it's administrative data, we're just going to dismiss it right away. And it has come up recently a lot in public reporting and comparative reporting.

Now, what's going on at this point is that we have public reporting at the national level with the Hospital Quality Alliance and the Ambulatory Quality Alliance, and they're doing their thing. Then we have the states doing their own thing, and what they've done at the states is take the quality indicators; nine states right now are reporting the quality indicators in comparative reporting, including some of the patient safety indicators and those kinds of things. From what I understand from NAHDO, a lot of the states now have legislative mandates that require them to do comparative hospital reporting, so they're looking for measures and the easy fix: administrative data is cheap, easy, quick. And they're looking to AHRQ because they believe AHRQ is credible, which AHRQ is, and also that the indicators are scientifically evidence based, and that's why they feel comfortable with it.

MS. GREENBERG: Aren’t they trying to have, produce measures out of the HCUP data?

MS. FARQUHAR: That's exactly where they were produced from. My future vision, later on, is connecting other databases and using some other databases like pharmacy and those kinds of things, which you'll hear about today.

MS. GREENBERG: And actually, in the next year or so the HCUP data should be improved with the secondary indicator that the committee recommended, the indicator on whether secondary diagnoses were present on admission. On the UB-04 it's being implemented next spring.

MS. FARQUHAR: Well, we put it in the software; we revise the software and update all of the codes every year, so that's already in the software and available for folks that want to do that kind of thing. So we're trying to think ahead a little bit. There has been, and you'll hear about it so I'm not going to repeat a lot of it, but there has been a study that Anne Elixhauser had completed about adding clinical components to administrative data, which makes it oh so much nicer and a bigger bang for your buck. Vital signs on admission and lab values on admission are two easy things that can be put in there and that really make the indicators a lot stronger, so we've made provisions for that within the software as well.

DR. CARR: Just a footnote there: that then means that electronic health records have queryable fields for vital signs on admission, and that's not necessarily true. So this is sort of the interface; if we want to get to this kind of data, just having an electronic health record doesn't do it, it's got to be in a queryable field.

DR. COHN: I don't mean to be argumentative, but as another organization that's implementing electronic health records, to say that things have to be automatically processable out of electronic health records doesn't necessarily always have to be the case. The fact that things are so much more readily accessible, that you can pull them up and look at them immediately, does get you half to three quarters of the way there. For example, we do audits; I think there's a real fallacy going on that somehow clinical data is uber wonderful. We have audit teams all over our organization pulling things out of the body of text because things don't make it into the coded fields, etc., etc., and the auditing is wonderful because you can get to it so much more easily.

DR. CARR: When it's electronic, yeah, and I think that is exactly right; we have to figure it out. I think in some ways people think electronic and it's automatic, but there'll be some elements that are good; lab values will be good, although as John was saying yesterday, even with those data elements you still need the whole story, you need glucose over time, a constellation of things. But I think it's important to state that it's not an automatic feed. With administrative data you just take it, you don't have to talk to anybody, you don't have to read anything, there it is, as opposed to the electronic health record, where it may be that now you can read something online instead of going to the medical records office.

I think thinking about what those data elements are going to be will at least inform in some ways how electronic data is collected. There are some things that are never going to lend themselves to automated queryable fields, but there are some things that routinely come up that probably ought to be thought about.

There are two things that we had talked about. One is that, for as long as we're using the administrative data, the validity of the data at the starting point will ultimately affect this, and I know a lot of the data manipulations, looking at signal to noise ratio and so on, are trying to account for that. But the noise sometimes comes from the fact that if you say deep venous thrombosis, it might be a clotted IV or it might be a real DVT, so that's noise you can't do anything about. But then there's noise of the kind we talked about: at our hospital we have a whole initiative about line infections and every kind of thing, so if someone writes in the chart possible, probable line infection, and then we have a whole committee that reviews them and concludes that it's not a line infection, nobody goes back and writes that in the chart. And so then you have this kind of variability across hospitals, because we do this now; we suddenly realized that even though our internal rate was going down, our publicly reported rate wasn't changing, and then we recognized that if you don't close —

DR. STEINWACHS: See if you did it in a management world the public reported rate would have been going down —

DR. CARR: So I had said, in terms of toolkits, that to have hospitals believe in their data they need to close the loop internally. Right now the doctors write their notes to their colleagues, and they're not thinking in terms of, the words I use and the closing diagnoses and all that are going to create my profile. And there was a time when doctors had to sign the attestation: you would see everything that went out under your name, and you could say, that happened, or, oh no, that turned out not to be pneumonia, and you could fix it. And they never see that now.

And the other thing is that coders can code the word hematoma, so it can be a bruise or it can be a retroperitoneal bleed; both are hematomas. And in one hospital they might be coding every little thing, and in another hospital they only code it if you need a transfusion. So there's a lot of important nuance now to what those codes are, so in our institution there are certain trigger areas that don't go out until a validator has looked at them, because it drastically impacts our public reporting.

So, as we talked about a couple of months ago I guess, I think having a toolkit so people at least know the rules of the game: I'm going to be judged on these things, and so do I have internal reporting that's correct?

And then my final thing is just that at the end of the day, when we have these things, is there some way that we can say, okay, here are the composite measures for Mass General or Johns Hopkins or UPenn, and here's the public reporting for a hospital that's been on the UMASS cardiac surgery program or something like that. Is there a face validity test against the hospitals that we would expect to have high rates, or a Baldrige hospital in St. Louis, whatever that one is? Some hospitals have external measures: their JCAHO scores were perfect, they got a Baldrige, they have the best performance. Does the composite measure align with that? And maybe it's going to be very good because it's going to uncover something we didn't know, but I think that's part of the challenge for the community, to say, I don't even know what they're talking about, or to see whether the hospitals you expect to be good look good, and hospitals that are on conditional probation come up looking like they're on probation —

MR. HUNGATE: Question. I'm listening and weighing(?) what you're saying, and now I'm wondering: can we take the RAND evaluation that comes from your work and use it as a springboard to get into those discussions of the kind of details you're talking about?

DR. CARR: We talked about it, and we also talked about, and I don't know if they went back and did this, the present on admission piece: if you do the composite measures without present on admission for the same hospitals, and then incorporate the present on admission information, does it change in parallel or does it tell a different story? So I'm only responding to you saying that the biggest battle is acceptance, and these are some of the things, the lived experience of what we've come to understand. Another thing, and then I'll stop: some people have nine spaces they can fill in and submit for diagnoses, and others have 15. Well, a hospital that shall remain nameless only did nine for a number of years and got many awards for that, and then once it went to 15 it fell down to the middle of the pack, and so there are all these kinds of random events —

MS. GREENBERG: How to lie with statistics.

DR. COHN: That’s actually not a random event —

DR. CARR: It was not an intentional event; say, a fortuitous unintended consequence —

But you're saying there are competing incentives: when CC capture was on, those CCs went to the top. Enlarged heart is not a significant CC, but it is a major co-morbidity, and enlarged heart was falling to the bottom, and that would very much affect your risk adjustments.

MR. LOCALIO: I want to pick up, I think, on where Justine at least started. I'm very skeptical, and that's based on a lot of experience. First of all, in most of these initiatives there's never a statistician anywhere, and the public reporting that goes out is done incorrectly, without any regard to principles of statistics. This has been happening again and again and again and again. I can remember Ernie Sassa(?), who was the first executive director of the Pennsylvania Health Care Cost Containment Council, said in a public meeting, I disagree with Russell Localio on everything he says, and I have said to people that is the greatest compliment that has ever been paid to me in my career, that somebody like Ernie Sassa would say that, because the legislation was ill conceived, the organization is staffed by people who don't know what they're doing, there are all kinds of problems in the data, and gaming, gaming, gaming, all over the place; every time I turn around I see people gaming the data.

DR. STEINWACHS: Has that improved?

MR. LOCALIO: I think it’s worse, I think it’s worse, it’s like new tax laws, you put in a new tax law and all of the tax attorneys in the world descend upon it to find out how they can get around it, and they do, the good ones do. The ones who are honest and just put in a good return make their clients look poor.

So I'm so skeptical, and I get involved in this again and again and again because people keep coming back to me and they say, would you look at this. Example: antibiotics within four hours of admission for community acquired pneumonia. Seems reasonable; I think that's one of the standards, at least it used to be. So what happens? Hospitals say, gee, that's one of the standards, I have to look good on it, so for anybody who comes into the ED and might possibly have pneumonia, they dab some topical antibiotic on them and they check it off, yeah, two hours. Well, that doesn't do anybody any good. In Pennsylvania I would regard the entire effort of the Cost Containment Council as a total, utter waste of the taxpayers' money; it's just been awful.

So I hope at some point in the process somebody will figure out that you've got to involve some statisticians, and there have been many good statisticians who are much more skeptical than I am, even at your institution; talk to Tom Lewis(?) about it, he's done some wonderful work —

MR. HUNGATE: Bill?

DR. SCANLON: I share some of your skepticism, but I'm not sure I'm as extreme in terms of, it sounds like, your pessimism too —

MR. LOCALIO: No, I started out being an optimist and it was only after a lot of experience that I realized just how bad it could be.

DR. SCANLON: I think currently it is bad, and I think you pointed out some of the issues. I'm not sure a statistician is all we need; I think we need both statisticians and some type of auditing function. Don's point is that you would have had much better public measures if the right management was in place, because that's a real concern. But we also need to think about where the state of the art is right now, and the point that Justine raised about what these measures look like compared to whether you got a Baldrige Award or JCAHO or something.

We did work looking at nursing homes; this was the first initiative that CMS did in terms of Nursing Home Compare, putting out quality measures, but there are also the survey results, which are a whole lot more detailed than what you'll ever get out of JCAHO in terms of understanding what quality has been like. Now, I say that last sentence advisedly. You've got these two sets of measures and they don't always agree; in fact there's a lot of inconsistency, even within the sets. If you look at a set of ten quality measures, you've got, hey, they did wonderfully on one and poorly on another. In terms of consumers, there's a question of what this information does for them: how can you possibly pick among these options when there are contradictions within the set of measures, and then contradictions between the two sets? You can have a home that looks great on its QIs and really poor on its survey.

So that, I think, is a reflection of where we are today in terms of the state of the art, and it largely relates to the fact that in some respects we're testing different things. It's kind of like a class where you get 100 questions on an exam: some get done right, some get done wrong, and what's the composite?

Now, I salute you in terms of moving towards composites, but we're not talking about comprehensive; we're talking about taking measures and trying to summarize them. But we've got so much further to go that how much we bet on this exercise at this point, in terms of payment or in terms of certification, is a real issue. I mean, how much can we afford to say we trust this measure and we're going to behave because we believe in it? It's potentially dangerous if we go too far with the measures that we have today.

MR. LOCALIO: Just back to something of Justine's: I think the real merit in the effort is to allow institutions to evaluate themselves, because then you don't have the problems of confounding of these measures across institutions. The institutions are responsible for their own measures; they can look at them, and whatever credibility they have, in terms of how much effort they expend on collecting those measures, they can use. That's the real benefit. But these report cards —

MS. GREENBERG: Well, I'm going to take a different approach, different from what I've actually supported for a long, long time; this could be the BBQ effect, the wines that my husband selected.

This whole measuring is not going to go away, and it's part of my portfolio; I'm devoted to evaluation and the importance of it. I think what Russell just said is very, very true: if you develop a culture in which people actually want to track what they're doing, and to improve and to reduce errors and all that, then that's probably the best thing, as Justine has said. But I really wonder if there's enough emphasis on systems that will have a higher likelihood of producing a good result, where you know what can go wrong and how you can intervene. I'm thinking of the slide show that Harry did at the NCHS Data Users Conference on e-prescribing and its impact in North Carolina, where just giving people information that they never had before about interactions between medications, all sorts of errors related to medications, which you know is a big area for errors; at least his preliminary data show that it's really had some dramatic effect on the information that is now out there to avoid a lot of errors that were occurring. And also, if you're talking about the value proposition, to get generics used more when there's no reason not to use generics; but they're incenting them, I mean, even when the generics are known to not have as good a result as something else. To me that may be where the people working in quality should look. You can't stop working on measures and everything, but there are all these problems, and I think Russell is absolutely right about how to lie with statistics, or not even use statistics. I wonder if there are things the committee can do, like the e-prescribing work, to try to make that case about some of these systems.

DR. CARR: I would just like to echo what Russ was saying. I think these have been shown to have great value, because within an institution they make you look internally, they make you drill down and pay attention. I think the runaway train is the pay for performance initiatives by the payers; they're so hungry for anything, they will grab whatever comes up on the webpage, that's it, they're using it, without necessarily the nuances and all of that. I think the work of making the composite measures is terrific, but we have to build the infrastructure so that the data that we have are valid, and we have to have some face validity.

But I think if there were anything to say right now, it's that comparing across institutions has a lot of jeopardy, but using change over time within an institution as a pay for performance initiative might have greater validity; so in the public domain, apart from the good work that's already being done, the application —

MR. HUNGATE: We've got six minutes left. Carol, we're just at the point of discussing; we've been talking a lot about composite measures, and we're going to try to wrap up, so listen carefully.

Let me ask you directly: is the content of this discussion additive to the arena as you see it, or is someone else doing what we've been talking about here?

MS. FARQUHAR: What has been said here has been said by numerous people; that's why we're trying to address some of those issues at this point. With regard to public reporting, I've heard over and over again: do not let the perfect be the enemy of the good. Consumers need the information; I was starting to say we are kind of beyond the point of leaving it to the experts. So what we can do is bring in the statisticians, bring in everybody we can possibly think of, to get these vetted as quickly and as thoroughly as possible, and have people look at the evidence and make their decision at that point, not just say that because it's administrative data it's not any good.

MR. HUNGATE: Is there a work product that tries to describe all the pitfalls —

MS. FARQUHAR: That's what I was thinking with regard to the reporting and the testimony and having people come in. Starting with public reporting as the overarching goal would be really, really helpful, along with some of the caveats that apply to any kind of data, not just administrative data. You go to the chart and you see a bruise marked as a hematoma; you see a retroperitoneal bleed, also marked as a hematoma. It's the same thing; it's a matter of the coders and interpretation, and it's everybody's interpretation. As a nurse I know I can look at a patient and say your Glasgow Coma Scale score is three, and somebody else will say it's four; it's a matter of judgment. And again, performance measurement, as I'm finding out going through all of the theory, is a matter of values: whose values take charge, and who has the most power.

MR. HUNGATE: Could we label this something like making indicators work, in terms of what a hospital has to do for it? How do we stage this in a way that it becomes useful?

DR. STEINWACHS: Are you thinking at the reporting end or are you thinking at the underlying —

MR. HUNGATE: Well, I think for the reporting to be beneficial it has to have a customer that can use it, which in an improvement model you want to be the institution that's trying to improve, and comparison over time may be the right way to look at things. But it seems to me that there are some ground rules that, as you say, are getting talked about, but are they put down in a what-do-you-do-about-it way —

DR. STEINWACHS: The other piece which was attractive to me was that no matter how good or bad these things are, consumers are going to use them, physicians in the community may use them, and others may use them just because they're there. So it seemed to me that one of those workshops might look at how you make this the most useful, with the appropriate caveats, to different audiences. If you're going in for something elective, you have a chance to use this; if you're going in for an emergency, you probably just want a composite that says whether this hospital is really crummy around certain things, or better, if you had that choice. But there's even the issue of whether you ever have a choice; after you call 911, do you go where you want to go? To a consumer, some of these things could be helpful if you could think about segmenting them appropriately and spelling out the caveats, because it's going to be there, it's going to be in the newspapers.

DR. SCANLON: I think you're exactly right in the sense that there is a responsibility. When you say consumers are going to use them, they're not going to use them if they're not there, and there's a responsibility on the part of the people putting them out there to make sure that the good is good enough. Even though we don't want the perfect to be its enemy, we want this good to be good enough, and I think we haven't always been there; there's been this rush to do something, and in the process we've allowed things to happen that are undesirable. We need to look back and ask which of those things we shouldn't be doing, and what the other resources are, as Russell said. We really can't just say, okay, here's the measure, go do it; we've really got to think about the resources that are going to be required to make sure that the data are valid, and then we can be more comfortable about consumers —

MR. HUNGATE: We're going to need to schedule a conference call to continue this discussion and complete it, to the extent of deciding what action to take; we can't reach that point here. So that's, I think, a first step, and we're out of time. Michael?

DR. FITZMAURICE: I think this discussion has been very healthy, because these are the kinds of questions I would expect to come up at hearings, and for NCVHS to make a thoughtful judgment about the value of the arguments against indicators and the arguments for them, and to reach some kind of conclusion: more research is needed, some are ready to go and others are not. It's clear the consumers want them and the payers want to use them, but are they good enough to be used? Are the data good enough? Do they measure what they purport to measure, what we think they measure? Are they good enough for the consumers, good enough for the payers, good enough for the professionals to guide quality improvement? These are the questions you've put on the table with your discussion today, and I think it would be very valuable to address them; even if they can't be resolved, they can be raised, and then over time you can get better and better answers.

MR. HUNGATE: I think the enumeration of the questions associated with each of the applications is a contribution, in terms of how you know whether the indicator is good. For us to judge and say it's good may be risky because of the vagaries of coding in a particular institution, so somehow describing that vulnerability to coding, and describing the process that must occur for the indicators to be valid and useful, is maybe where our contribution lies.

DR. CARR: Because actually those caterpillar graphs that we've been discussing show you the low end and the high end. It would be interesting to look at who's in the low end and who's in the high end on all of those, and whether somebody goes from the best to the worst, say on quality indicators compared to safety indicators; you could actually be the best on quality and the worst on safety. There are face validity questions you would want to ask: even if you have your statistics done and your weighting, at the end of the day does it make sense? Is the best hospital in the world on some remote island somewhere that no one ever heard of?
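[Editor's note: the caterpillar-graph concern raised here, that tiny institutions can land at either extreme of a ranked display, can be illustrated numerically. The sketch below uses entirely hypothetical event counts and a 95 percent Wilson score interval around each rate; it is not drawn from the testimony or from real hospital data.]

```python
import math

# Hypothetical (adverse events, case volume) pairs, for illustration only.
hospitals = {
    "A (n=5000)": (150, 5000),
    "B (n=4000)": (140, 4000),
    "C (n=60)":   (0, 60),    # tiny volume: point estimate looks "best"
    "D (n=50)":   (5, 50),    # tiny volume: point estimate looks "worst"
    "E (n=3000)": (96, 3000),
}

def wilson_interval(events, n, z=1.96):
    """95% Wilson score interval for a binomial event rate."""
    p = events / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Caterpillar data: hospitals ranked by point estimate, each with its interval.
ranked = sorted(
    ((name, e / n, *wilson_interval(e, n)) for name, (e, n) in hospitals.items()),
    key=lambda row: row[1],
)
for name, rate, lo, hi in ranked:
    print(f"{name:12s} rate={rate:.3f}  95% CI [{lo:.3f}, {hi:.3f}]")
```

Running this puts the two smallest hospitals at the top and bottom of the ranking while their intervals are several times wider than those of the high-volume hospitals, which is exactly the face-validity trap described above: the apparent best and worst performers are the ones about whom the data say the least.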

[Whereupon at 9:00 a.m. the breakout session was adjourned.]