DEPARTMENT OF HEALTH AND HUMAN SERVICES
NATIONAL COMMITTEE ON VITAL AND HEALTH STATISTICS
WORKGROUP ON QUALITY
June 24 and 25, 2004
– Minutes –
The Workgroup on Quality of the National Committee on Vital and Health Statistics (NCVHS) held hearings on June 24 and 25, 2004, at the National Center for Health Statistics (NCHS) in Hyattsville, Md. The meeting was open to the public.
Members:
- Robert H. Hungate, Chair
- Justine M. Carr, M.D.
- Peggy B. Handrich
- Donald Steinwachs, Ph.D.
Members of Subcommittee on Standards and Security:
- Simon Cohn, M.D., Kaiser Permanente
- Harry Reynolds, Blue Cross and Blue Shield of North Carolina
Staff and Liaisons:
- Anna Poker, AHRQ, lead staff
- Stan Edinger, Ph.D., AHRQ
- Marjorie Greenberg, NCHS, CDC, executive secretary for the committee
- Trent Haywood, M.D., J.D., CMS
- Julia S. Holmes, Ph.D., NCHS
- Debbie Jackson, NCHS, staff
- Gail R. Janes, Ph.D., CDC
- Gracie White, NCHS, staff
- Kevina Bracey, NCHS, staff
- Marietta Squire, NCHS, staff
Others:
- Laura Blum, JCAHO
- Rosanne Coffey, Medstat
- Kathryn Coltin, Harvard Pilgrim Health Care
- Jessica DiLorenzo, General Electric
- Nancy Foster, American Hospital Association
- Stanley Hochberg, M.D., Provider Service Network
- Jeff Kamil, M.D., Blue Cross of California
- Vahe Kazandjian, Ph.D., M.P.H., Maryland Hospital Association
- Edward Kelley, Ph.D., Agency for Healthcare Research and Quality
- Jennifer Kowalski, AHRQ
- Jerod Loeb, Ph.D., Joint Commission on Accreditation of Healthcare Organizations
- Lisa Mims, America’s Health Insurance Plans
- Peggy O’Kane, National Committee for Quality Assurance
- Barbara Paul, M.D., Beverly Enterprises, Inc.
- Christopher Queram, Employer Health Care Alliance
- Karen Titlow, PT, MA, The Leapfrog Group
- Deborah Wheeler, AHIP
- John R. Lumpkin, M.D., M.P.H.
The Workgroup on Quality held hearings on June 24 and 25, 2004, to analyze stakeholder perspectives on the business case for enhancing administrative data for improved quality measurement and to solicit feedback on eight proposed recommendations contained in its May 2004 Report on Measuring Health Care Quality (http://ncvhs.roseliassociates.com/040531rp.pdf). The Workgroup received 13 presentations and talked with four panels about the values, the costs and the challenges of implementing the recommendations for data enhancement.
The eight proposed recommendations related to the following data elements:
- Selected Laboratory Test Results
- Selected Vital Signs and Objective Data
- Secondary Diagnosis Indicator for conditions present at admission
- Operating Physician Identifier code
- Both Dates and Times for admission and procedures
- Episode Start and End Dates for services billed using Global Procedure Codes
- Functional status codes – review options
- Functional status codes – create mechanism for reporting
Panel 1 – Purchasers
Trent Haywood, M.D., J.D., CMS
Focusing on the use of administrative data, Dr. Haywood described the tension between building upon a foundation that was never intended to be used in that manner and allocating energy and resources toward quicker adoption of EHRs. In general he was positive about the recommendations, responding most strongly to numbers one, two and six, although six might not be useful for pay-for-performance. He felt the best methodology for three would be an administrative fix, and that four’s value was variable. While five is important, it might not have the desired results and could be burdensome. He felt that obtaining physician participation in seven would be problematic, and that eight would be very beneficial, but only with standardization.
Karen Titlow, PT, MA, The Leapfrog Group
Ms. Titlow noted the importance of standardization and of collaboration between organizations toward that purpose. She stated that almost all consumer and purchaser health care decisions are made by guessing, and that transparency is essential for the broadest access to standardized quality measures for provider comparisons. Ms. Titlow was positive about all of the proposed recommendations, emphasizing three and four to reach the granular level of the individual physician. She ranked recommendations two, one, three and four highest, with seven and eight lower because they would take longer to put in place. Dates and times for admissions and episode start and end dates were less important overall. Ms. Titlow linked seven with eight and felt three, four and five together would have a “bigger bang for the buck,” as would one and two.
Christopher Queram, Employer Health Care Alliance
Believing that what gets measured and reported publicly gets improved more quickly, Mr. Queram promoted transparency linked with rewards and incentives. He emphasized administrative data, recognizing that the opportunity to use information outweighs the added burden of collecting it. He also reported that there is a window of opportunity to try to influence the National Uniform Billing Committee (NUBC) revision to the UB-04 to better support performance reporting and pay for performance demonstrations. His Alliance has developed six broad areas of administrative data expansions that match closely with Workgroup recommendations one through six.
Jessica DiLorenzo, General Electric and Bridges to Excellence
Bridges to Excellence is a multi-stakeholder approach to creating incentives for quality, Ms. DiLorenzo explained, with the basic belief that competition should happen at the individual physician level, which requires improvements in outcome and administrative data. Her program measures performance in the areas of diabetes and cardiac care, using NCQA standard rewards and measures. Regarding the Workgroup’s recommendations, numbers one and two are the highest priority for Bridges to Excellence and would ease the barrier to provider participation. The others she found to be focused primarily on the hospital side, and so less relevant to Bridges to Excellence.
Panel 2 – Quality Measurement Organizations
Jerod Loeb, Ph.D., JCAHO
There is a legitimate business need for just about all of the recommendations, Dr. Loeb stated, adding that their ICU risk models need many variables. His only reservation was with times of admission and procedures, because the trend in the field is to use arrival rather than admission date and time, and he urged the Workgroup to reexamine this. He added that standardizing data elements will help to reduce pushback and controversy over risk adjustment techniques. “Data quality should undergird everything we’re doing,” he stated. JCAHO believes the recommendations are sound and will result in acceptable or better quality data.
Peggy O’Kane, National Committee for Quality Assurance
Ms. O’Kane explained that administrative data will remain critical for the measurement of quality and can be enhanced by altering current data capture processes. When data count for payment, accreditation or public reporting, she noted, the data get better. She supported the set of recommendations and suggested the Workgroup pay attention to the framing of functional status. She also urged the Workgroup to concentrate on things that are likely to change rather than things that are not, and to clarify on coding forms when added elements are required to be coded during care episodes. She noted that EHRs will be no more useful than paper records unless they are structured and coded for abstracting quality data. She stated that a crucial part of advancing this agenda is making the case for specific gains: additional health measures, the priorities to which they relate, and the health benefit to be gained.
Edward Kelley, Ph.D., Agency for Healthcare Research and Quality
The issue of timeliness is important to Dr. Kelley. He observed that EHRs will not soon provide quality analysis and quality improvement, and that timing of conditions cannot be understood using current administrative data. He felt numbers two, three, seven and eight offer improved data and better risk adjustment. Number one offers important process-outcome links, although there are concerns about burden. AHRQ has value-for-effort concerns about numbers two, six, seven and eight. In general, AHRQ is ambivalent about recommendations that would make important data improvements but carry a wide spectrum of associated costs. He discussed AHRQ QIs and how those may be used for public reporting.
Panel 3 – Provider Organizations
Nancy Foster, American Hospital Association
Reminding the Workgroup that hospitals are a diverse group of organizations with very different capacities to accommodate change, Ms. Foster requested careful consideration before more data collection is imposed. She stated that hospitals see the value in their own internal QI work and in collaborating with other organizations, but not in publicly sharing the information. Even so, she felt that standardization is essential for data and measures and for data collection methodologies. She gave AHA’s full support to the Committee’s efforts. She pointed out problems in consumer misunderstanding of risk adjustment, cautioned that AHA understands quality to be a property of systems rather than of individual providers, and called number three a great idea and extremely helpful, though with issues of condition identification.
Barbara Paul, M.D., Beverly Enterprises, Inc.
Dr. Paul spoke about the large overlap between hospital and long term care patients. She stated that oversight of long term care has produced a lot of standardized valid data, the MDS and OASIS datasets, and she emphasized recommendation one, noting that they need standardized laboratory result reporting in real time simply to provide care for patients. Administrative claims data is too late for their use. She also approved of numbers two, seven and eight and believes the recommendations can be implemented independently. The continuity of care record (CCR) effort contains the shell of what is needed, she stated. They are strongly committed to the end result of the Workgroup’s efforts.
Stanley Hochberg, M.D., Provider Service Network
Dr. Hochberg believes that, if pay for performance models are going to improve, it is also necessary to bring physicians back into contact with valid data. He was strongly in favor of number one because it would improve HEDIS measures and give administrative data measurement more clinical depth. To allow the time needed to upgrade for this, however, he encouraged the Workgroup to consider gradual phase-in of measures according to what is most critical. Because it would be burdensome, he proposed deferring number two until clinical care IT systems improve. He was in favor of number three for several reasons and felt the Workgroup should move ahead with it.
Dr. Hochberg had mixed reactions to the remaining recommendations.
Vahe Kazandjian, Ph.D., M.P.H., Maryland Hospital Association
Adding academic rigor to the discussion was Dr. Kazandjian’s intent. He stated that the transformation of performance into quality through evaluation creates a challenge with some measures because it puts a value on a value-free measurement. He discussed the importance of ongoing monitoring of measures. Health status was a focus of his presentation, and he felt health status in itself should be part of accountability and true outcomes. In his early work in this area, they began calling indicators “pointer dogs.” He described a good indicator as a valid dog that points to a pheasant and not to a rabbit when you are hunting pheasant, but he concluded that the real success is in training the user, not in giving people the best dogs.
Panel 4 – Health Plans and Insurers
Kathryn Coltin, Harvard Pilgrim Health Care
Ms. Coltin described how the data elements proposed in the recommendations would enhance health plans’ operational efficiency and effectiveness in quality measurement and oversight, medical management and clinical programs, financial planning and analysis, product development and marketing, and provider contracting and reimbursement including pay-for-performance programs. Her order of preference for the recommendations is 1, 3, 2, 4, 5, 7, 8, and 6. She asserted that there is not a one-size-fits-all approach that will work for quality data, and she believes that EHRs will enable more accurate and less burdensome transfer of electronic data from EHRs to claims transactions, but will not necessarily improve the completeness of the data.
Jeff Kamil, M.D., Blue Cross of California
With six years of experience in pay-for-performance programs, Dr. Kamil explained how quality data is used to make those programs effective, positive and popular. He presented the business cases that good measurement offers for health plans, physicians and hospitals. Having more data on procedure types and their outcomes would be beneficial for patients, he reported, noting that hospitals want value data against which to compare them. His organization would use quality data to develop networks of effective providers, and they feel recommendations 1, 2, 3 and 4 are the most important.
Discussion with Panel 1 focused on: concerns that pay-for-performance will increase disparities of care, the purposes of recommendation four, the variety in testimony on the need for functional status reporting, and the serious challenges that lack of standardization raises for tracking functional status. Standardization efforts by NCQA and the Consumer/Purchaser Disclosure Group were discussed, as were frequency of data collection and potential analysis problems.
The potential contribution of the Workgroup toward the development of EHRs was the first topic of discussion with Panel 2, followed by levels of specificity in the recommendations. The panelists did not perceive the creation of a mechanism for recording data as a productive step without more specification. The panelists shared their experiences and perspectives with physician profiling and accountability. Ms. O’Kane noted that there is not just one set of recommendations that is an ultimate solution, and some things will have a limited half life.
Further research on functional status was discussed with Panel 3. This could begin during the lengthy processes of adding and implementing new data elements. Requesting a briefing from people in charge of this area was another suggestion, noting the confusion caused by the many similar questions on different forms. The Workgroup’s biggest potential mistakes identified by panelists were: steering towards doing the wrong thing clinically by identifying the wrong thing to measure; failing to emphasize the business case; and focusing only on systems, rather than people. It was proposed that the Workgroup move forward on recommendations identified as low hanging fruit: costing little but having value to patients, providers and payers.
Beginning Day 2 of the hearings, Mr. Hungate stated that his goal for the end of the session was a tentative conclusion that could be presented to the NUBC. He asked all participants to share their assessments of the previous day’s testimony. Through heated discussion it was concluded that the panelists’ testimony varied regarding the recommendations, with many calling for greater specificity. All agreed to support number three and agreed that the Workgroup’s efforts should support development of the EHR. The purpose of the Workgroup was described not as mandating data capture, but as saying that structures must be built so that capture is possible. Pushing for standardization was another core purpose. Mr. Hungate proposed that the group articulate the spectrum of specific measures, note that NQF and AHRQ will determine those measures, and state that these measures hold promise if the mechanisms are in place. Dr. Carr argued for reevaluation and flexibility.
In the discussion with Panel 4, Ms. Coltin and Dr. Kamil shared ways to improve the business case: collaboration on data, sharing information with providers, and emphasis on straightforward administrative data. They agreed that when data collection is part of normal health care, incentives will work. The presenters also described their risk adjustment approaches and data sources. The Workgroup then moved to general discussion of the hearings. It was agreed that the next hearing will include CPT Category II performance measurement codes, learning about the 837 and the Designated Standards Maintenance Organizations (DSMOs), and talking to NUBC and NUCC to find out their activity on these eight items. Dr. Cohn stated that the group is still data gathering and warned that trying to draw conclusions before having data can lead to the wrong conclusions. He suggested hearing from NUBC and NUCC before taking 3, 7 and 8 to the full committee in September.
Referring to this hearing as “Lurching toward Measurement,” Workgroup Chair Mr. Hungate welcomed all participants. The hearing was called to address the perspectives of health care purchasers, quality measurement organizations, provider organizations and health plans and insurers on the business case for increased measurement. Participants were asked to give feedback related to the eight proposed recommendations and consider what can be done now versus what actions should wait for the electronic health record (EHR).
Panel 1 – Purchasers
Trent Haywood, M.D., J.D., CMS
Dr. Haywood observed the constant tension between building upon a foundation (administrative data) that was never intended to be used in that manner and allocating energy and resources toward quicker adoption of the idealized design of EHRs. He noted that individual clinicians and practitioners have concerns about quality measurement from an administrative source.
The existing infrastructure and the familiar MDS dataset led CMS to begin with nursing homes. From MDS, Dr. Haywood stated, they can provide measures nationally that were pilot tested before being implemented. Stakeholders have concerns about continuing to rely on that dataset, which was not designed for quality reporting. CMS is now discussing how to improve it in MDS 3.0. He noted that if the EHR were available as it is wanted, it would remedy many of these issues.
Dr. Haywood described one vision as working with the NQF in a consensus-building process so that measurements can be made regardless of whether or not they are administratively derived. To avoid increasing the burden, CMS looked at what the Joint Commission was already requiring and built upon what was already available. They identified the ten measures for three clinical conditions that are on the CMS professional website. The next clinical condition, surgical infection prevention, has generated provider support as their first true patient safety module or domain.
Dr. Haywood noted that they are trying to move forward at a reasonable pace, recognizing the demands on individual providers but also the need of purchasers and consumers for information. He recognizes that many institutions are not set up for quality data measurement. CMS is expanding its infrastructure to support that activity and to create the QIO data warehouse.
Dr. Haywood then addressed the proposed recommendations. For one and two, he stated that collecting selected lab results and vital signs is important and a priority for CMS; they would improve the ability to evaluate immediate outcomes and would be used in pay for performance. He noted that not all vital signs are equal, but all are important, adding that until this is fully implemented, CMS will try to get this data through an administrative process.
For number three, Dr. Haywood felt the best methodology will be an administrative fix. He fully supports risk adjustment and trying to look at complication rates. The value of number four would depend upon your point of interest, he felt. Number five is also important for assessment and pay for performance, Dr. Haywood said, expressing concern that some clinical issues may not necessarily get results in the exact desired data. He felt number five may be considered somewhat burdensome. If people are demanding that information and providers truly have to provide it, it would be a better route than chart abstraction.
Dr. Haywood believes that number six will definitely help disaggregate services billed under global procedure codes, which is important for coordination of care and the continuum of care for the individual patient. He is uncertain of its utility for pay for performance. He called number seven “problematic, from the reality of trying to get physicians to go back, disaggregate, bill for something for which we know we’re not going to pay them any differently.”
The general sense Dr. Haywood gave for number eight, functional status, was positive. If standardization was in place around functional status, he felt it would be beneficial across the board to really look at true improvements of care.
Karen Titlow, PT, MA, The Leapfrog Group
The Leapfrog Group is a consortium of health care purchasers, Ms. Titlow explained, noting the importance of close collaboration between organizations that are trying to standardize measures and pay for performance. Standardization progress by the Workgroup is critical for all efforts to move forward quickly. She attested that, without this level of data, they have a daily struggle to implement the most important ways to improve health care in the country.
Ms. Titlow noted that right now almost all consumer or purchaser health care decisions are made by guessing. She believes that transparency is essential to have the broadest possible access to standardized quality measures for comparison between individual providers. She explained that the burden for hospitals is the many variations in what is being asked. Standardization could make a huge amount of difference in encouraging the hospitals to report.
Ms. Titlow believes that this data is what patients need to be able to make informed decisions, which is a patient’s right. Purchasers have been buying based primarily on the cheapest possible deal, and she feels this is not an effective way to try to improve health care quality. The members of the Leapfrog Group have formally committed to buy based on the highest quality rather than the cheapest care. Therefore, she stated, “we need to know what high quality care means and be able to differentiate between high quality providers.”
Ms. Titlow was positive about all of the proposed recommendations. Number one is helpful for case identification, appropriateness of care, and tracking utilization. This kind of information would support partnership pay for performance efforts. The business value of number two, Ms. Titlow stated, is that it helps ease the tracking of intervention, and standardization will reduce the administrative burden on hospitals. She named number three as one of their top priorities, important for improving risk adjustment and identifying injuries caused by the hospital.
Number four is also important to reach the granular level of the individual physician, she stated, because they would prefer to pay individual physicians instead of shifting towards different health plans that are assumed to provide quality care. Ms. Titlow noted that this information would increase motivation and allow sharing of incentives between hospitals and physicians.
Although number five is relevant for evidence-based hospital referrals and is one of the seven conditions on which they collect individual data, it is not tremendously important for Ms. Titlow’s purposes. Episode start and end dates, however, improve risk adjustment because there are very few standards of risk adjustment methodology.
As a former physical therapist, Ms. Titlow is “two thumbs up” on creating the functional status code. She noted that functional status is really essential for employees: when they can go back to work, how much they can lift, whether they can still drive cars. She acknowledged the Workgroup’s recognition that the standards must first be set, an agreement reached, and then the data collected. She felt the investment in standardized methods would definitely be worthwhile.
Ms. Titlow ranked the recommendations two, one, three, four, then seven and eight, because they would take longer to put in place. Five and six, while important and helpful, she felt were less important overall.
Christopher Queram, Employer Health Care Alliance
Mr. Queram introduced the Employer Health Care Alliance Cooperative, a health care coalition with a direct purchasing/contracting model with health care organizations rather than through health plans. The Alliance emphasizes the importance of data, and it collects and maintains an administrative data repository for reporting cost and utilization and for comparative provider performance reporting.
“Our belief is that what gets measured and reported publicly gets improved faster,” stated Mr. Queram, explaining a study that showed that hospitals whose performance is reported publicly invested a greater degree of resources in improving quality. He and his organization are convinced that transparency moves markets and changes behavior, especially if linked with rewards and incentives.
Mr. Queram noted that administrative data is the only means available near term to move performance reporting. He asserted that consumers and purchasers cannot afford to wait until electronic means of gathering and reporting information are widely available. The opportunity cost of doing nothing dwarfs the direct cost of collecting information, he stated.
The National Uniform Billing Committee is now looking at changes to the UB-92 form, soon to be the UB-04 form. The Alliance convened an advisory panel that developed six broad areas of expansions of administrative datasets that match closely with Workgroup recommendations one through six.
These expansion areas are:
- Secondary diagnosis codes present on admission, a top priority for differentiating complications from comorbidities and making severity adjustments to hospital performance evaluations.
- Unique physician identifiers for each hospital procedure, which allow outcome assessment for physicians performing hospital procedures and are more precise than retrospective linkage of records.
- Vital signs at admission (heart rate, blood pressure, temperature, respiratory rate), which are powerful predictors of mortality for common conditions.
- Key lab values at admission.
- Do-not-resuscitate orders present within the first 24 hours.
- Time of admission, discharge, and procedures, to aid in understanding and reporting delays in treatment within acute care settings.
Mr. Queram shared that they do not know a lot about what works in the area of pay for performance, including the size of the incentives or exactly how to motivate changes not only in provider behavior but in beneficiary behavior. He does feel that this is a time of critical importance for moving forward with experimentation. Purchasers are looking for strategies and techniques to begin to combine information with meaningful incentives to impact provider performance and change beneficiary behavior and decision making.
Jessica DiLorenzo, General Electric
Ms. DiLorenzo works not only for General Electric but also for the not-for-profit Bridges to Excellence. She reported that GE imagines a health care market where service providers are free to compete based on the published value of the services they provide and are rewarded for high performance. The transparency piece is going to drive this marketplace, in which consumers should be engaged, free to choose and sensitive to the value of the services they consume.
Bridges to Excellence is a multi-stakeholder approach to creating incentives for quality, Ms. DiLorenzo explained. It is purchaser funded and made up of large and small employers. She stated that in developing the program, serious discussion took place over two years as to what the measures should be, what the incentives should look like, what was important to everyone involved. The program is designed to encourage providers and patients.
Ms. DiLorenzo stated that the program focuses on office practices and has rolled out two specific condition areas, diabetes and cardiac care, in selected markets. It is set up with rewards and measures that are NCQA standards, and NCQA also administers the measurement sets. The program has made an impact in very different markets and has had virtually no pushback from physicians.
The objective is to improve outcomes for patients, Ms. DiLorenzo clarified. She noted that the program requires measures that will be actuarially sound, so that there is a return for the purchaser and a business case for purchasers to get involved in moving the market. They want to improve the quality, but are also looking at the savings at the very individual metric level. She believes that this framework will break the gridlock.
Their basic belief is that competition should happen at the individual physician level by disease and procedure, instead of at the health plan and the network level. Competition cannot happen without robust comparisons and accurate data, and competition at this level is only possible through improvements in outcome and administrative data, Ms. DiLorenzo asserted.
Regarding the Workgroup’s recommendations, she stated that numbers one and two are the highest priority for Bridges to Excellence and would definitely ease the barrier in provider participation. She noted that, because of the value, providers are finding very creative ways to get through the application process for Bridges to Excellence or NCQA.
Ms. DiLorenzo reported that secondary admission diagnosis flag and operating physician would improve the precision of hospital performance measures, while dates and times for admission and procedures would allow for measures of timeliness of hospital care. Although some of the data are relevant to Bridges to Excellence, she observed that most of them are more focused on the hospital side.
In response to Ms. Poker’s concern over the quality of captured data and potential increases in disparity and access to care, Dr. Haywood assured the Workgroup members that they are measuring the accuracy of the administrative data they are using. He added, “You have so few measures right now because we don’t want to move so quickly the science isn’t sound. No one wants a system built upon something that’s fallible.” He shares the concern regarding unintended consequences and stated that real risk adjustment has led to incentive activities that may reduce disparities. With that, there may now be a financial incentive to take care of patients who have more comorbidities.
Ms. Titlow pointed out the important distinction between concept and execution, explaining that it’s hard to be against the idea of collecting the data. Execution—how well are folks trained, how accurately do they report the data—is a very separate issue. “Even if a process won’t be exactly right, that shouldn’t stop us from trying to take the improvement steps that we could take,” she said. She emphasized that it is a fallible system, not a machine.
Mr. Queram commented, “Using information makes it better, then it becomes an incentive to improve the quality and the integrity of datasets when it’s used responsibly. Perhaps pushing the envelope to invest the time in coding practices, coding procedures, data auditing techniques and so forth would improve the integrity of the information.” He added that they would place a higher emphasis on the usability of information for pay for performance, for consumer decision making and for public accountability than they would on scientific and clinical credibility.
Ms. Greenberg updated the group that the NUBC will likely vote on the UB-04 by the November meeting. The current draft does include the indicator for secondary diagnoses, which already is part of the 837-I; if this is a required element, it would have an impact on anyone reporting hospital data. There are some fields that will be on the UB-04 that could be used for the vital signs and the lab values.
Discussion ensued around the priority level of number four. Dr. Haywood clarified that it would help match billing components and Mr. Queram stated that it would enable outcome assessments for physicians performing inpatient procedures. It also eliminates the need to link institutional and professional databases. Ms. Coltin pointed out that professional claims frequently provide only a provider ID that represents a billing group or billing organization. These data are generally proprietary, whereas most hospital datasets are available and can be used in the public good to create these kinds of measures.
Ms. Handrich noted variation in the testimony regarding functional assessment and its priority. While Ms. Coltin and Dr. Haywood spoke somewhat passionately to the importance of standardized data to inform functional assessments, Mr. Queram and Ms. DiLorenzo did not. Ms. DiLorenzo replied that she felt functional status was more relevant to the elderly population, while Bridges to Excellence is focused on active employees. Mr. Queram stated that they are trying to be selective in their requests and that functional status is not as actionable as the other datasets. Although recognizing its significant value, NUBC does not include functional status, Ms. Greenberg reported, because it is not standardized and would be difficult to implement.
Bridges to Excellence asks the health plans to use national standards, Ms. DiLorenzo reported, adding that a consensus of measures with less variation would greatly help the health plans report. Mr. Queram stated that the Consumer/Purchaser Disclosure Group is formulating ground rules to be used by purchasers, payers and others, for what measures should be adopted and requested for public reporting, pay for performance and other purposes.
Dr. Cohn raised the issue of frequency of collection of several of the data elements and whether certain data is more suitable to claims transactions or attachments. NCQA measures are a snapshot of a period of time, recognized as good for three years. Because they are at a very early stage of determining incentives and standardized quality measures, Ms. Titlow’s organization wants blood pressure on every visit to provide a broader amount of information for better data quality and flexibility. Mr. Queram stated that “our advocacy is to make these added fields mandatory on the UB-04, so it would be submitted every time there was a hospital discharge.”
Discussion developed around the issue of potential analysis problems. Dr. Edinger pointed out that a lab test by four different methods in the same hospital could create a false trend because of different methodology, techniques, and normal ranges. While LOINC can accommodate variation, analysis by a person would still be necessary. Dr. Carr added that some very important quality indicators would be unwieldy to have on the claim form. Getting to this level of granularity is important, stated Mr. Queram, as is limiting the burden. He reported that they have submitted to the NUBC for lab values at admission, not every lab result. Dr. Carr pointed out that questions as well as data elements must be standardized. Ms. Coltin stated that, in order to maximize value and evaluate quality, it can be necessary to pool data with other health plans.
Panel 2 – Quality Measurement Organizations
Jerod Loeb, Ph.D., Joint Commission on Accreditation of Healthcare Organizations
Dr. Loeb began his testimony by commenting, “I really applaud on behalf of the Joint Commission the basic message in this report, the importance of standardization in the measurement field. Standardization is sort of antithetical to our society, but it’s abundantly clear that absent the standardization, our ability to establish appropriate benchmarks, to track and improve performance over time, pay differentially for performance and compare providers is difficult, perhaps even impossible.” He noted that data collection is increasingly becoming a byproduct of the care delivery process rather than a separate activity.
Dr. Loeb’s staff analyzed the report and developed responses in the context of trying to understand how it would impact their existing business processes and impact their vendors. There is a legitimate business need for just about all of the report’s recommendations, he stated, adding that their ICU model needs many variables to calculate risk models. The first recommendation relative to laboratory values is really very important in terms of standardizing, but the extent of the laboratory values is not clear in the report.
Physiological data all are critical in the context of the ICU set. Linkage of the test results or even vital signs to data collection is absolutely essential. Dr. Loeb stated that comorbid conditions are also important ICU risk adjustment variables, so they strongly agree with the recommendation for secondary admission flags. He also expressed approval for required reporting of operating physician, because it would add value to performance measurement for internal QI and external reporting. He advocated collecting the data just once using a standardized set of variables, then delivering it to a board or accrediting body for certification decision making, which would significantly advance the field and probably reduce the pushback.
Dr. Loeb expressed concern over dates and times for admission and procedures, because although it is already integral in their process, the existing measure calculations purposefully use arrival date and time rather than admission date and time, which is a general trend in the field. He urged the Workgroup to reexamine it.
In terms of pay for performance, Dr. Loeb stated that the Joint Commission uses data elements to construct measures and not data elements themselves as a unique dataset. It would be generally beneficial if some of the recommended data elements could be derived administratively and not require chart abstraction. His organization does not risk adjust process measures, but he noted that many patient level data elements are necessary to appropriately risk adjust outcome based performance measures.
Dr. Loeb broke out which recommendations were relevant to their risk adjustment. Recommendation one again is critical, but he noted that the actual usefulness will ultimately depend on which values are included in the final set. The recommendations for vital signs, flags for secondary diagnoses present on admission, and admission (arrival) dates and times all have significant bearing upon JCAHO’s risk adjustment methodologies. He added that standardizing data elements here will go a long way toward reducing some of the pushback and risk adjustment technique controversy.
“Data quality should undergird everything we’re doing but there is much variability today in data quality,” stated Dr. Loeb. He added that they believe the Workgroup’s recommendations are sound and will result in data of acceptable or better quality. If standardization stimulates a migration of data elements from those requiring chart abstractions to the administrative dataset, then these recommendations will significantly enhance general value. He also stated that it is important not to shift general thinking toward only measuring variables available through administrative datasets rather than chart abstraction, which is the only source at this time for some important variables particularly with respect to risk adjustment.
Dr. Loeb encouraged the Workgroup to take the approach, “do I ask the question first, what the right thing to measure is, can I find reliable data, and what about the cost benefits relationship.” He summarized his presentation by saying that they strongly support the basic tenets of all of the candidate recommendations. Additional standardization around data collection and reporting is absolutely essential although there still are significant challenges in the area of implementation.
Peggy O’Kane, National Committee for Quality Assurance
NCQA is a private non-profit that measures, reports and produces information that enables value based purchasing. Ms. O’Kane stated that they emphasize pay for performance and can address the kinds of things that are important to measure that can’t be measured today. NCQA’s public reporting on more than 300 health plans can be used by the public for choice and by providers for quality improvement. She noted that this activity is very expensive and that it takes a lot of effort for people to improve their evaluation results. NCQA’s comprehensive accountability system is used to evaluate quality of care for commercially insured and Medicaid and Medicare enrollees, and has been adapted for measurement of performance by medical groups. It is required by many public and private sector purchasers.
Transparency really works, Ms. O’Kane insisted, and this agenda should be expanded because a central aspect is making comparisons. Payment reform will move this agenda by rewarding superior performance and by moving away from the many ways in which bad performance is currently rewarded. “We need to think about health as the true north for value, and all of this requires meaningful, sound, and feasible performance measures,” she said, adding that many Medicaid programs are now beginning to look at pay for quality.
Ms. O’Kane stated that the lack of accessible and inexpensive data is a major barrier to measurement, and there are many things that they cannot measure because it is too expensive or too complicated. She noted that at the hospital and physician level, chart review becomes prohibitively expensive, particularly for very sick patients seeing multiple specialists. Electronic medical records will not be more useful than paper records unless they are structured and coded for abstracting quality purposes, she said, adding that free text in electronic medical records is not useful. Administrative data will remain an important source, so it is reasonable to improve it.
In her responses to the Workgroup’s recommendations, Ms. O’Kane suggested that it should be clear which measures will be used for accountability, provider feedback, surveillance or research purposes since not all measures will be useful for all purposes. She combined recommendations one and two, noting that lab results and vital signs can represent important outcomes or risk adjustment parameters. Routine performance measurement and pay for performance activity will increase substantially with routine reporting through administrative claims.
Ms. O’Kane urged the Workgroup first, to concentrate on things that are likely to change as opposed to things that are not likely to change, and second, to clarify on coding forms when added elements are required to be coded during care episodes. She stated that electronic data on test results and vital signs would significantly increase the ability to do clinical measures and risk adjustments at a much lower cost per measure.
For number three, Ms. O’Kane explained that more measures could be calculated with the addition of a few routine data elements on medical claims. For example, calculating the rates of adverse outcomes would be greatly enhanced if diagnoses present on admission were routinely and validly coded. Inclusion of standardized severity designations might aid in the calculation of risk adjustment outcomes or measures of appropriateness, she said. She called number four a very useful recommendation that NCQA very much supports because it makes the accountability picture much clearer.
Number five is in line with the IOM’s identification of timeliness as a key aim, Ms. O’Kane stated. To improve care that requires adherence to certain time intervals, date and time information is essential, and the time intervals most critical for improving clinical outcomes or minimizing patient stress should be identified. She urged the Workgroup to “think about the patient worrying and stress as also an important outcome and an important corollary of care.”
Ms. O’Kane identified number six as a desirable and useful addition to administrative data, and stated that recording standardized start and end dates is advisable for practices where care must be initiated within certain time intervals to achieve desired outcomes. Such capture should not be required for services where time is not of the essence, she added. In number seven and eight, functional status information is an important outcome for many care processes and patient groups that should be captured in a standardized fashion. She noted that many factors contribute to functional status and interpretation of that data as an accountability measure can be very complicated, so framing is a crucial issue.
Ms. O’Kane concluded that administrative data will remain critical for the measurement of quality in health care for some time. Administrative data can constitute an easily available, reliable and valid data source for the measurement of many aspects of health care quality. This data can further be enhanced through the alteration of current data capture processes. “When these data count for payment, for accreditation, for public reporting, the data get better,” she said.
Three factors that Ms. O’Kane felt would support the adoption of the recommendations were: making clear which additional performance measures could be computed; determining how these measures relate to national, regional, or other important health care quality goals and priorities; and quantifying the health benefit gained by improving the performance in the areas of these newly deployable measures.
Edward Kelley, Ph.D., Agency for Healthcare Research and Quality
Dr. Kelley congratulated the Committee and Workgroup on the report. He expressed his gratitude for the support of AHRQ by NCVHS and this workgroup in the development of the National Health Care Quality and Disparities Reports. His perspective on the business need for improvements to administrative data reflects a research and quality agency. He noted two prominent AHRQ areas relating to the hearing: quality indicators AHRQ developed for internal improvement assessments of quality of care at the hospital level, and the National Health Care Quality Report and National Health Care Disparities Report.
Dr. Kelley stated that the effectiveness area, which covers the different condition areas, is the tracked area for which this has the most relevance. He also noted the issue of timeliness and commented that it will be quite some time before electronic medical records will be readily available for quality analysis and quality improvement. Timing of conditions cannot be understood using current administrative data, he asserted. For example, while renal failure may be coded, it is not currently clear if the patient was admitted with the condition or if it originated during the hospital stay. Hospitals also differ in the depth of their coding.
Dr. Kelley grouped the recommendations into three areas with some overlap. He stated that numbers two, three, seven and eight offer improved data and better risk adjustment. Number one offers some important process outcome links although there are concerns about burden. AHRQ has value for effort concerns about numbers two, six, seven and eight, and he shared specific thoughts about numbers one, three, four and five. In general they feel some ambiguity over recommendations that will make important data improvements but have a real spectrum of associated costs.
Number three has high value, Dr. Kelley stated. It will allow providers, purchasers and policy makers to distinguish complications from co morbidities and enhance performance measurement and risk adjustment. This recommendation could be a critical component in a pay for performance environment by enhancing indicator definitions and related risk adjustment, and by providing clarity around complications versus co morbidities in general. He noted that it could be valuable to internal QI efforts, comparative reporting by organizations and the public reporting of provider performance.
AHRQ feels recommendations four and five have a strong business case because implementation should be relatively low cost and will assist in establishing accountability for procedure related outcomes. Number five and part of six address timeliness, Dr. Kelley observed, for which there has been a schism in the views. Patients have valid perceptions of timeliness for things such as getting appointments at the right time. Clinical measures address effectiveness and safety. “This discussion would vastly improve the ability to track timeliness in a much more clinically specific way and would also allow us to take the first step on linking patient and provider perspectives on an extremely important dimension of quality,” he noted, adding that the times issues in number five would help on two measures that AHRQ currently tracks nationally using chart review data.
Dr. Kelley reported that AHRQ is ambivalent about their role in recommending number one, because while this would add tremendously to information available, for national reporting they have a ready source of data in the National Health and Nutrition Examination Survey. He stated that they think it may fall more on the margin in terms of the value and immediate need.
Commenting that the recommendations in the report in general present additional things to be done, Dr. Kelley noted that there is some room for the committee to advocate additional examination of what of the many, many possible clinically related data elements really make sense to add to a standard transaction.
AHRQ QIs are developed for internal quality improvement efforts. Although there is a lot of interest in using these for public reports comparing hospitals, there are a lot of questions in terms of the clinical validity of these datasets. Dr. Kelley stated that they will be evaluating what key clinical data elements can be added to selected measures from the AHRQ QIs and to what extent the use of clinical data in performance measurement adds value relative to the cost of data collection. He explained that this is relevant because the AHRQ QIs are a lens on administrative data in general. The clinical data elements that will be examined include history, vital signs, clinical assessment, procedure reports, lab values, cultures, DNR orders, and times. He commented that vital signs, lab values and times would be especially important in terms of the set of eight recommendations being considered at the hearing.
Dr. Kelley observed that there may be work by other partners that would help the Workgroup hone its discussion across the numerous recommendations. Identification of the most efficient data elements to add will be data driven and statistically determined. He believes that improved standardization and measure alignment would likely be the primary legacy of the Workgroup.
Dr. Cohn thanked Ms. O’Kane for the reminder that EHRs are not a panacea and noted the value of informing EHR developers that free text or incorrect structure can create great barriers. Ms. O’Kane added that NCVHS could make a tremendous contribution by pointing out some of the pitfalls and reviewing some of the experience and trying to send a signal to the vendor community. A mechanism for feedback from customers to vendors might take some kind of institutional intervention, however. Dr. Loeb identified the problem of vendors not integrating developed performance measures from the beginning of their process, which creates enormous technical, political and scientific challenges.
Evidence-based evaluation of the recommendations is a welcome concept, Dr. Cohn noted. He observed that this Workgroup is achieving valuable specificity. Dr. Kelley remarked that they are partnering with Pennsylvania Health Care Cost Containment Council to use their analysis. He stated that it will be nice to have some statistical experience in a small pilot setting to compare with some of the thinking on this report, and it evolved in part because of some of these recommendations.
Ms. Poker observed that in a good system, all relevant data should be captured without becoming a burden, and she hopes that is the goal of this group. In trying to find the right questions, she asked the panel whether they had recommendations or suggested quality questions.
Ms. O’Kane asked to come back with suggestions. She recommended the IOM priorities as a good starting point and called any progress in those areas “enormous.” She also urged the committee to recognize that there will not be one permanent, eternal solution, so they should move forward, taking the best set of recommendations given current knowledge.
For some recommendations, Mr. Hungate noted panel feedback that sub-pieces are better than the larger recommendation itself: in specific places the data is more helpful, more disease or condition specific. He commented that if a measure is efficient, it will have a strong business case that says the value exceeds the cost, and he noted that it is probably quite specific. He asked for thoughts on how to get from the current status to that point. In response, Dr. Kelley stated that there is a range in value for the different recommendations, so the Workgroup might in the end recommend those that are of higher value. He asked whether all of the specificity is needed for this effort, or if the next step and the piece that follows is a subgroup getting together on a subset of the recommendations. Mr. Hungate stated that resources are limited and principle statements are much easier than reconciling statements.
Noting that specificity is necessary for comparable data, Ms. Greenberg suggested that the recommendations can create a basic mechanism for reporting this information, without actually saying which vital signs or which tests at what time, etc. She asked Dr. Loeb if groups such as his could then identify exactly what should be collected. He noted two caveats, first a consensus development process involving the right stakeholders, and second would be the notion of being sure that there is a cost benefit analysis that underlies this, with supporting evidence. Ms. Greenberg then asked whether to pursue a recommendation like one and two, recognizing collection of some vital signs, some lab test results, etc., and promoting a mechanism to do so in administrative data. Ms. O’Kane felt they should be more specific, and Dr. Loeb agreed, saying “it points the car in the right direction but it doesn’t move the car and I think that’s a real significant issue.”
Raising the question of who will implement measures, Ms. O’Kane emphasized that this has to be an evolving relationship, that there’s not one set of recommendations that could solve the problem once and for all. Some of these things will have a limited half life.
Discussion developed about physician profiling. Dr. Janes asked for elements to include that would play into that world and strengthen the ability to do profiling. Ms. O’Kane replied, “the issue of measuring individual physician performance is not for sissies.” Their approach has been conservative: it is target-based and involves compliance with standards based on guidelines from ADA, American Heart Association and ACC, with some structural aspects. She called it quite distinct from other kinds of profiling. She discussed plans that successfully provide profile information back to the doctors (see final panelists from this hearing), and suggested thinking about creative ways to take claims data and populate practice management systems. She was not sure that all results of profiling should be publicly reported. She indicated that she would check with staff on particular data elements for profiling.
The categories of volume, practice and outcomes are ways Dr. Carr suggested looking at physician profiling. Measuring adherence to best practice, combined with volumes, would mean tremendous strides forward. Using outcomes leads to challenges, she noted. Ms. O’Kane indicated that they are obtaining volume and adherence data but at massive costs, involving doctors going through paper records. She felt there is a lot of potential to give them better information.
Dr. Kelley then noted that even with physician specific information and data elements you still don’t have the risk adjustment. Using the outcomes is a long term prospect even once there is agreement on the data pieces that seem most statistically efficient and clinically important. Accountability is one of the biggest issues in this area, Dr. Loeb noted. Even with unique physician identifiers, where do you point the finger? “I understand the whole notion is quality improvement, not accountability,” he stated, “but when it means board certification and ultimately perhaps public reporting the stakes are very, very high and the tempers are very, very short and you’re right, it isn’t for sissies.”
Mr. Reynolds then asked the panelists for their sense of the value proposition for the individual practitioners as they deal with their vendors and then they deal with everybody else. Dr. Loeb took the idealist position, saying that it is improved health care quality, pure and simple, the ability to track longitudinally how you’re doing as well as compare it with that of others in a standardized manner. Ms. O’Kane noted the extensive and appropriate physician frustration toward accountability, stating that a renegotiation of the whole definition of accountability is needed. She feels that all parties must embrace the agenda and try to figure out a way to work together constructively toward advancing it or we will continue to live with chaos and waste and unfair judgment.
Dr. Kelley observed that a pay for performance environment has to be in place for the actual dollar side of that value equation. He warned that wanting to be as good as the best only gets you so far. It is necessary to think hard about the value question and where it ends up and what are some of the remaining barriers that this will not solve, and be honest about how it gets solved.
Panel 3 – Provider Organizations
Nancy Foster, American Hospital Association
In the process of thinking about improving health care quality and health care quality measures, Ms. Foster asked the Workgroup to remember that hospitals are a diverse group of organizations with very different capacities to accommodate changes. All are committed to serving their communities and believe that quality measurement will help them.
Ms. Foster explained that hospitals think there is an inordinate amount of quality measurement taking place, though not necessarily a lot of quality information available. AHA sees the struggle to find a way to communicate good information—not data—to the people who want to use it to effectively improve care. Quality improvement cannot happen without a data source and the context in which to understand that data, she stated, adding that they struggle because the data themselves don’t come with a context, so it must be created.
AHA wants to work with other organizations to provide information to the public and providers that will help them to improve quality and dramatically reduce the cacophony that exists around measurement, Ms. Foster reported. AHA has started with ten measures as a way of building a data path towards getting information out to the public, and she called it a learning process.
Ms. Foster agreed with previous panelists that standardization of data and measures is essential, reducing the burden of collection and moving things forward. She called the standardization reflected in this report “incredibly valuable” and gave AHA’s full support to the committee’s efforts. Standardized methodologies for data collection are also crucial, she added, “the very nitty gritty level of the person in each hospital or in each organization having a common understanding of what they’re supposed to do, how to do it, and doing it that way in order to build up to where you have data that you can compare and generate real information. It is not an easy task.”
Ms. Foster observed that it is valuable to embed certain information in the uniform bill only if there is a common understanding of priorities and what should be measured. Once that is agreed, then drill down to the detailed level of each of the specifications that needs to be gathered in order to effectively measure.
Ms. Foster requested the Workgroup’s help in making the jump from IOM national priority areas to knowing what to measure about each of those priorities. It is then possible to find effective measures and begin to talk about what data elements are needed in order to collect the information to create the answers to those measures. She called it a very detailed process.
The report asserted that outcomes measures are the most valuable for consumers, but Ms. Foster feels that question is really still unanswered. She pointed to studies showing that when consumers look at outcomes that have been risk adjusted, they read that as whoever is collecting the data having fixed the data so it comes out the way they want, not having risk adjusted it in the way that this group would understand. There are some very good process measures that may be as effective in communicating to the public as outcomes measures, she stated.
In relation to collecting the specific provider/physician identifier, the AHA understands quality to be a systems property, not an individual provider property, Ms. Foster cautioned. She told the story of a recent Life Magazine photo showing the 99 hospital employees who affected one patient over one three-day hospital stay. Systems properties are very, very important and by proposing to embed an individual physician identifier, she warned that the Workgroup may scare and mislead people about individual rather than system accountability.
Ms. Foster called the flag to identify the conditions present on admission a great idea and extremely helpful. The problem is lack of consensus on identifying as present on admission for some conditions, she stated, giving the example of pneumonia, how it is defined and the potential timeline issues. Having a flag when it is not clear what to flag is not helpful. She stated that the same may be true around service dates and times, adding that it is fairly rigorous and time consuming to collect that information, especially if it has to be abstracted from medical records.
Most of the 100 to 125 bed hospitals that work with AHA have at least one full time employee dedicated to collecting data, and that is a lot of investment, Ms. Foster observed. Hospitals have said that they see the value in their own internal quality improvement work and in collaborating with other organizations or collecting data similarly. They do not see the value in publicly sharing the information, because there has not been a response from the public.
Ms. Foster warned the Workgroup to think carefully before imposing more data collection and be sure of what it is, what the value is, to whom that value is, and how you make sure that the people who incur the burden at least get some sort of recognition for that burden and response.
Barbara Paul, M.D., Beverly Enterprises, Inc.
Dr. Paul offered a long term care perspective on the “incredible value of standardization to propelling health care quality,” and emphasized the value of integrating data collection into the clinical flow of processes. Beverly Enterprises measures quality using a very detailed scorecard that is heavily weighted toward clinical quality, she explained. They align incentives toward higher quality very, very explicitly.
When talking about hospital quality and costs it is important to look at long term care quality, Dr. Paul explained, because hospital and long term care patients overlap heavily and there are frequent transitions between these settings. Transitions frequently cause health problems, so one key objective needs to be to find ways to help long term care improve in quality, which standardization should do. She noted that oversight of long term care has produced a lot of valid standardized data, the MDS and OASIS datasets, which are transmitted to a central repository, validated, cleaned and studied. These data are much more clinically rich than current hospital data. She also reported that nascent electronic health records exist in long term care in pieces, with the MDS dataset a module and physician order entry another piece.
Dr. Paul was very positive about government and other organizations adding new approaches to quality in the long term care area. It is good, she said, to “chase some carrots as opposed to always just kind of running away from the sticks.”
For recommendation one, Dr. Paul agreed with the need for standardized laboratory result reporting, noting that they need it simply to provide care for patients. They need it in real time; administrative claims data is too late for their use, she explained. “Real time laboratory data for admissions would make a huge difference with one stroke in terms of the quality of the care we could provide tomorrow,” she said, urging the Workgroup to do more than simply recommend that this go on the UB-04 or UB-05.
Regarding pay for performance, Dr. Paul’s organization operates now in a kind of negative pay for performance environment: if they fall short they are subject to fines, withholding of Medicare payments and so forth. She stated that this information would allow them to avoid that situation and would also help them pursue positive pay for performance in their sector of health care.
Dr. Paul noted that recommendation one would obviously help with risk adjustment as well as acuity assessment as they accept patients from hospitals. Regarding mode of collection, she stated the need for one standard dataset, created and maintained by the lab provider, using standard nomenclature and sent in a standardized transaction, asynchronous with the billing form. “I’d go so far as to suggest that Medicare should mandate that if a lab provider wants to be paid that they should have to provide it in this format,” she added. Recommendation one is the greatest need for long term care providers, and she also concurred with recommendations two, seven and eight.
Dr. Paul believes the recommendations can be implemented independently, and the continuity of care record (CCR) effort contains the shell of what is needed. It would be a way for all involved to move together fairly quickly in this regard, she stated.
For mode of collection, Dr. Paul thinks that the one real benefit they could offer back to the hospitals would be that the MDS functional data elements might be a useful starting point for a standardized set of functional elements. She added that, for really effective use of this information, it needs to be asynchronous and in real time.
Dr. Paul addressed MDS and SNOMED: because SNOMED lacks functional status elements in the area of ADLs, she would recommend integrating SNOMED into the next version of MDS and then using the current MDS data elements to fill the functional element gaps. She noted that the CCR effort also needs some functional status elements added.
In conclusion, Dr. Paul observed that long term care patients are hospital patients, and she is very excited about this standardization because not only would it help them improve their quality, it would actually help them provide the quality care they want to be providing tomorrow. She stated that they really are committed to the end result of the Workgroup’s efforts: transparency, accountability, and the real time communication of information that this would afford.
Stanley Hochberg, M.D., Provider Service Network
Dr. Hochberg is the medical director for an organization of six hospitals and their physicians for managed care contracting. They build data systems and maintain all payer data warehouses and other systems to support the management. “One of our mantras is that data needs to be aggregated,” he stated.
In the past 20-30 years physicians have been taught to ignore data, Dr. Hochberg believes, because most of the data they see from administrative databases is not valid. He explained that if pay for performance models are going to improve, it is also necessary to bring physicians back with valid data. He outlined other problems with data: it often profiles past performance, it has minimal clinical depth, and most physicians do not find administrative HEDIS measures particularly compelling. For instance, “to physicians, it’s not whether you did the glycohemoglobin test for diabetes, it’s how you controlled it,” he said.
Regarding the Workgroup’s recommendations, Dr. Hochberg gave selected laboratory values “a double thumbs up,” because they clearly would improve HEDIS measures and give administrative data measurement more clinical depth. The pay for performance contract models would also be better, he stated. Theoretically, it should also improve the accuracy of risk adjustment models. For high risk members in need of outreach, providing the information physicians need to decide to whom to reach out would increase the buy-in for claims based measurement.
On the downside, Dr. Hochberg argued, laboratory values would dramatically increase the volume of data captured, and current systems would need significant time and effort to prepare for this; even the risk adjustment models would have to be reevaluated and retooled. To allow time to upgrade, he encouraged the Workgroup to consider a gradual phase-in of measures according to what is most critical, instead of introducing all of them at one time.
Dr. Hochberg feels that the costs to implement recommendation one are significant but not unreasonable; it is more a time issue than a magnitude-of-investment issue, with the overall balance of value versus cost being favorable. The phasing and introduction of this are critical to its success, he noted.
For the second recommendation, Dr. Hochberg believes that collection for reporting will be burdensome, particularly without electronic systems. He noted that a lot of background work is needed to make this happen in an automated way, and questions would be raised about the reliability of measurements such as blood pressure. He proposed deferring recommendation two until clinical care IT systems improve and this information is captured and fed down to claims more easily. Recommendation two relates to a smaller subset, so although it would provide valuable benchmark information, the value versus cost is not as favorable as for the first recommendation.
A diagnosis modifier or flag for present at admission would assist in risk stratification of cases, improve current quality assessment efforts and support more accurate measurements from claims databases, Dr. Hochberg explained. The overall cost to implement for institutions is probably low, and the value moderate, so it is probably something that should go ahead. He referred to this as “low hanging fruit,” a phrase that carried through the rest of the Workgroup’s discussion of the recommendations.
Dr. Hochberg stated that an operating physician identifier code would dramatically improve the ability of outside agencies or payers to monitor and profile individual surgeons. This would benefit external agencies, he added, but problems with adequate sample sizes and risk adjustment are likely, particularly if claims are not aggregated across different payers. He is not sure of the usefulness of identifying the surgeon without improved outcomes measurement. He addressed another critical piece: this would allow tiering of individual surgeons under pay for performance models. He noted that this would “scare some of my constituents to death, there’d be a very high bar for statistical validity.” He also addressed the issue of evaluation at the surgeon level versus the institution level, and noted that information on individual surgeons is already accessible through many institutions’ robust internal QI programs.
With dates and times for admissions and procedures, Dr. Hochberg had some accuracy concerns. He felt the utility is limited to selected interventions in which outcomes are clearly tied to the timing of interventions. Evaluation of this would involve collecting confounding variables, and he noted that one would in essence be adjusting cases to interpret the data.
Dr. Hochberg felt that there is not a very big universe or utility for episode start and end dates, and was again concerned about probable accuracy. He stated that large scale collection of functional status codes at any reasonable cost is probably contingent on electronic health record adoption. There is no economic model right now for most individual practices to make this investment, he added.
Beyond the adoption of electronic health records, Dr. Hochberg believes that standardized assessment and data input are needed, and there is no standardization among the current vendors. He thinks that would require some kind of government action, so it is out of scope in the near term.
Vahe Kazandijian, Ph.D., M.P.H., Maryland Hospital Association
Dr. Kazandijian was interested in bringing some academic rigor to the discussion. He asked whether there is a business need for this report. While he called it very timely and thoughtful, he challenged the angle taken on some issues.
Measuring performance obviously is of importance. Dr. Kazandijian emphasized the distinction between measuring quality, which does not work, and measuring performance, which is a value-free concept. He observed that the concept of benchmarking becomes important in pay for performance, and the distinction between incentives and rewards becomes very important. “Do you provide incentives for those who are not doing well or do you reward only those who do well,” he asked, and does this increase the gap?
Dr. Kazandijian stated that the transformation of performance to quality through evaluation creates the challenge with some of the measures, because it puts a value on a value-free measurement. In his example, “if you say waiting time of 18 hours is okay then it becomes okay, if you say it’s not it’s not, but you measure waiting time the same way in both situations.” Whoever puts the value on that measured performance makes a difference in how it is adopted, he said, and that is an issue with all of the recommendations.
Dr. Kazandijian raised the question of ongoing monitoring. He noted that when a process becomes part of the fabric of life, it is different from a project with an end date; it has a shift in mentality. He stated that the continuous approach is also important from a quantitative point of view because the necessary process adjustments will need long term ongoing monitoring.
Adjustment in general is the case to be made here, Dr. Kazandijian argued, and he believes that all of the measures would benefit from it. But rather than treating risk adjustment or clinical adjustment as the only type of adjustment, he included epidemiological stratification as well. He stated the scientific need to uncover, discover and improve on this, so it is important to have a long term commitment to monitoring, because measures should be able to accommodate scientific changes and novelties in processes. He felt that all of the recommendations could accommodate such scientific changes except for the physician identifier, which he referred to as “relatively controversial to everybody.”
We are at a crossroads on health status, Dr. Kazandijian stated, noting that this may be a wonderful opportunity, with the recommendations, support and guidance, to take it to a level that people have shied away from. He proposed that perhaps health status is the ultimate outcome when it comes to public understanding of quality. Health status is what should change if the physician did well. He considered that perhaps health status should be looked at and promoted beyond the hospital and nursing home. The most frequently performed high cost procedures in the U.S. are mostly elective and include a discussion between the provider and the recipient as to why they are being done. Functional status in that situation is the ultimate outcome, he noted. Health status in itself should be part of accountability and true outcomes.
“I couldn’t resist discussing health status because thinking a bit out of the traditional circle may also benefit the cause and for me that is really the ultimate outcome,” Dr. Kazandijian remarked.
Dr. Kazandijian related that, in the mid 1980s when they started the quality indicator project, one challenge was describing the actual nature of the indicators, of the measures. The best way they found was to call the indicators pointer dogs. He described a good indicator as a valid dog that points to a pheasant and not to a rabbit when you are hunting pheasant. However, “I can give you the best dogs in the world and you will never get the pheasant if you’re not trained how to interpret that pointing or to know how to shoot,” he stated, concluding that the real success is in training the user, not in giving people the best dogs. That brought him back to asking: how does this translate into education? How does it get a parallel program for the users?
Functional status was the initial focus of discussion with the panelists from provider organizations. Ms. Greenberg’s sense from the panelists was to do more research and identify standardized ways of capturing functioning and to be able to say comparable things about health status or functioning. She noted the panelists’ recognition that outcomes may make the most sense to consumers and ultimately purchasers, adding that it also is valuable to understand the direct processes and best practices, because people have to be health care advocates for themselves and their families. She also concurred that information is needed in real time and that putting it on administrative transactions will not achieve that.
Some study has been done on functional status, Ms. Greenberg noted, but it was inconclusive, which is why no standard for functioning and disability has been adopted. Dr. Cohn pointed out that a 2003-2004 CHI study said there is “fuzziness in this whole area, we don’t know what questions to ask much less what answers to expect.” Without this one cannot determine the code set or define what it ought to be doing. He stated that because of this it seems very premature to create a mechanism for recording functional status. Ms. Greenberg replied that translating existing free text information into standardized coding is partly behind the recommendation, but added that the standardized coding has not been agreed to.
Recommendations seven and eight were intended to be linear activities, Ms. Coltin stated, suggesting that research could begin during the lengthy processes of adding and implementing new data elements.
Discussion moved to existing functional status standards specific to communities such as rehab and long term care. Mr. Hungate expressed concern that the Workgroup is trying to encompass everything, which will mean that groups previously able to use their own measurements will have to wait for standardization across the board. Dr. Paul noted that in the nursing home setting, they are working with CMS and ASPE and others to incorporate SNOMED with MDS, to take SNOMED-CT to make it better for functional status and address variation of terms. To get more depth, Dr. Cohn suggested that the Workgroup request a briefing from people in charge of this area. He described the confusion caused by the many similar questions on different forms and whether to leave them or try to standardize them or their answers.
The science for measuring functional status for the non-institutionalized population is not there, Ms. Handrich observed. Referring to the people who go in and out of hospitals and may have short term stays in nursing homes, she suggested that this area be identified as a priority and pursued with some endorsement of this committee and others. Dr. Kazandijian reminded the group of the issue of continuum of care and the concept of linking the different sites to different activities, to different outcomes.
Although the morning’s presenters emphasized the tremendous opportunities that are possible with quality, the afternoon presentations warned of the great risk if movement is made in the wrong direction or without careful thought. Dr. Carr asked panelists to identify the biggest potential mistakes the Workgroup might make.
“One huge danger is that if you identify the wrong thing to measure you are going to push people towards doing the wrong thing clinically,” Ms. Foster replied. Dr. Hochberg noted that all of the focus on measurement also has to create the business case or economic underpinning for institutions and practices to make the investment to succeed. Focusing only on system concepts is a danger, Dr. Kazandijian warned, noting that health care is still delivered to individuals by people.
Mr. Hungate asked panelists what data is used in projects they have joined to improve quality. Dr. Kazandijian explained that in many of their projects, they design the measures and data elements with the participants, because existing databases could not provide answers. For government projects, Ms. Foster replied, the data is hospital chart abstraction requiring between 25 and 45 minutes per record. Dr. Hochberg stated that everything for him devolves back to HEDIS measures because all ambulatory material is claims based and all pay for performance programs are from administrative claims.
Part of the agenda is identifying what is efficiently done with less expensive administrative information instead of relatively expensive chart abstraction. Dr. Cohn proposed, and Mr. Hungate agreed on, identifying recommendations that are obviously low hanging fruit (costing little but having value) about which the Workgroup can say: go forward now. He observed that knowing the cost benefit of the other recommendations will take more time. Mr. Hungate defined ideal low hanging fruit as benefiting patients, providers and payers. Ms. Coltin suggested also focusing on measures that serve multiple goals.
For the next step, Mr. Hungate proposed lab value measurement, where it is cost effective to do it and where providers would believe it was of value to them. Mr. Reynolds cautioned that they are trying to jump to an answer or jump to how many fields can be used and where would they go and what would they be. He observed that if there is some low hanging fruit, discipline is still needed for one or two more steps to make sure that whatever is approved fits in and can be captured in the current structure. Mr. Hungate expressed his need for a starting point to move from the nebulous to the specific.
Regarding costs and benefits, Dr. Carr raised the question of where this information resides, pointing out that there must be a recipient place or places to close the loop on the cost/benefit. Ms. Foster encouraged considering in each case to whom or to which parties, because it is likely to be of cost or of value to more than one. She added, “If you don’t have a pathway of getting the information back to those who would find it of value, then you’ve missed the boat, too.” Dr. Hochberg pointed out that making these standards will drive hospital system investment and other investments; the data will be pulled for reporting, and once released it cannot be recalled.
Mr. Hungate began the morning session by stating that their primary mission really ought to be around health, not health care, and called this process “a visit to the emergency room by the health care measurement system.” His goal for the end of the session is output that goes forward; a tentative conclusion of some sort.
If this group is able to offer any conclusions or summaries, they should be reported at the NUBC’s meeting in August, Ms. Greenberg stated, to be considered in November when the NUBC votes on the final UB04. Although the NUBC chairs will attend the Workgroup’s September meeting, Mr. Hungate felt that a hearing summary should be provided sooner if the group was comfortable with it as an accurate characterization.
Ms. Greenberg stated that, despite differences, panelists seemed to generally support the fact that some additional information could be added to administrative data to assist with quality assurance, and that this was a route that should be pursued. Dr. Cohn cautioned that it is “very premature to start making conclusions based on a panel of those that we have self-selected as people who most want the data.” Mr. Hungate noted that the describing of the business case will leave some unanswered questions which will only be covered by that next step of information. He then asked all participants to share their assessment of the previous day’s panels.
The quality measurement organizations want as much of everything as possible, Ms. Handrich observed, adding that Mr. Queram’s praise was tepid because the recommendations of his group are much more specific. From purchasers and providers, she heard constant concern about the need for more specificity. She would be comfortable going forward only with recommendations three and seven at this point; all of the others drew concerns and caveats.
Major fundamental questions haven’t been answered, according to Dr. Carr, such as how quality is defined in the public domain. She described the many means of capturing data and the challenges of each, and reviewed questions relating to lab tests and vital sign specifics. Focusing on how you get the data element in may be the wrong way, she argued, emphasizing developing fields within the EHR that are the actual critical quality elements, not surrogates, properly defined and easily capturable for electronic transmission.
Dr. Carr agreed with implementing recommendation three and felt the committee could support creating a definition of performance status. This has been a negative study, she stated, and all recommendations other than three and seven should be retracted. Instead NCVHS should fill the gap of that interface with the electronic health record. The panelists who would have to implement the recommendations she described as “speechless at what was being put forth, and very concerned that there was not a clear articulation of how we would be better off.” Cost/benefit evaluation, she stated, must always include how a suggestion will improve quality, its cost in time and reconfiguration, the durability of the changes and flexibility for future evolution. She is in favor of retracting recommendations for which there is no clear understanding of the improvement they will provide.
Mr. Reynolds was in agreement, observing that quality data is already flowing from hospitals to states without any administrative process. He noted that the AHA is worried about adoption, cost, and how and where you get the data. If it is made part of the financial transaction, errors will interrupt the smooth flow of payments, which is one of the focuses of HIPAA. Using the EHR to capture data puts the data in an accessible situation, he stated. Current administrative data stay within a structured environment rather than being shared and thus can be misleading.
The variety in testimony was reinforced by Dr. Cohn. Quality versus cost and value is one of the fundamental conundrums, and he was very pleased that AHRQ was looking at these specific issues. He stated that the workgroup or full committee needs to potentially support, study, and track and, as results come back, begin to incorporate AHRQ’s work into NCVHS thinking. He agreed that 3 and 7 are very low hanging fruit. In functional status, he would like something that tells us what we need to capture, beyond just a terminology selection issue. There is a need to understand what functional status is, how it could be used, the business case. That should be pushed forward, he felt. He viewed some of the other items as mid-term values that need AHRQ’s cost and value equation. The long-term piece is the EHR, he noted.
Discussion then shifted to development of the EHR. Mr. Hungate asked what has to happen now to make sure that the EHR will do the things we want. Starting from first principles with a strong quality focus is essential to getting quality information out, Dr. Cohn explained. The selection of discrete data elements largely determines your results. The difference between coded data versus something lost in free text is the difference between efficient capture and use of the data, versus back to chart abstraction. Dr. Steinwachs pointed out the issue of varying interpretation. Mr. Hungate called the EHR “a deliverable from our total work, made clear and articulated.”
Returning to the participants’ assessments, Dr. Edinger agreed with Dr. Cohn that the EHR would help, but after 25 years of waiting, he feels they need to do something in the intermediate time, and the EHR will not work well if the elements are developed and added retrospectively. His agency and others working on the EHR need help and guidance on which elements and which conditions. Making a business case that addresses the differing viewpoints of the provider, the insurer and the regulatory agencies is difficult, he observed. He is not sure a good case has been made to hospital administrators and financial people, and consumer input may also be needed. He concluded that they should attack the low hanging fruit and work longer on some of the other issues.
Dr. Holmes offered a slightly more optimistic reaction. She stated that thinking they were about defining quality measures is a misunderstanding of the focus of this Committee and Workgroup. The purpose of this Workgroup on Quality and the recommendations is to identify the data elements needed to support quality measurement, she asserted, not to say which lab values to add and what conditions to focus on. She stated that they need to suggest a placeholder for data on lab values and vital signs in a standard transaction form; other entities involved in quality measurement can then define what should go into the placeholder. She noted that Dr. Hochberg was very positive about the addition of lab values and vital signs to support quality measurement for providers, because it would enhance the ability to risk adjust and provide true comparative data on physician performance.
Ms. Greenberg stated that she was “stunned” by Dr. Carr’s suggestion of retracting the majority of the recommendations. She thinks this would be devastating to the quality improvement world and damaging to the Workgroup’s credibility, observing that they have heard so many of these concerns over the last years and have been working toward the electronic health record since at least 1959. People are not prepared to wait for universal EHRs and the purchasing community is not going to put up with this. She stated that they are paying for health care and this is an opportunity to get some really intelligent thinking. Some data can be worse than no data and data can be dangerous, she agreed, but payers are very frustrated not to be getting any information that is really going to help them, their consumers and their customers.
Ms. Greenberg refocused on the role of the National Committee. It is not to agree on the right quality measures, but to push for standardization. Not having that is devastating for burden and cost and does not result in usable data. People will do pay for performance and collect information, and she believes the Workgroup can do something important on standardization. She agrees on moving ahead with three and seven, adding that if we cannot share administrative data, we will not share electronic health record data either; both raise the same issue of finding mechanisms for sharing data without compromising privacy or the solvency of organizations. She reiterated that this is not the group to say what data to collect, but it could support a process that names the four lab values that make obvious sense, and maybe a few others, in pilot tests, some of which are already going on. She concluded that telling purchasers that “we’re just waiting for the electronic health record, we are retracting all this, does not encourage them to work with us or the community in a sensible way.”
The IOM recommendations state that patient safety is synonymous with quality, Ms. Poker said. Her understanding is that the goal of this group is to support quality, and that quality should drive the EHR and not vice versa. She believed she heard panelists accepting most of the recommendations with some very important and valuable caveats. She noted in particular that both outcomes and process should be evaluated. She said that she could not identify what was the lowest hanging fruit. With the long-term care population being the one most often hospitalized, she felt that may be a good place to start to identify the data elements they would need.
Dr. Steinwachs stated that the group must consider “what we are doing to make it possible to capture relevant information across that spectrum that goes from our concerns with obesity in America, to not killing people in the process of health care.” Many of the recommendations are not mandating data capture, but saying it must be built into the structures so that capture is possible. He has found very useful the IOM report on the 20 “conditions” for quality measurement. He is most bothered that the business case is not the patient’s perspective. He would like the Workgroup to establish the business case for talking to key decision-makers about their interests. Also, he feels a responsibility to facilitate meeting the needs of patients. About EHRs, he has long assumed that administrative systems would share clinical information, in an interoperable form, before the EHRs. He believes that making it possible to capture key elements begins to build a structure where a patient could have a summary of their electronic data. Progress in administrative data may actually help push communication and coordination of care, he feels.
Mr. Reynolds feels the group should move to recommendations but not focus only on the 837, which limits the amount of true care information that flows for purposes of financing. Attachments about specific situations provide more data to explain a situation and present the quality data about it. He noted that this group will not direct where that goes and added that purchasers are not necessarily coming through standard health care channels to deal with quality. The perspective is, I am paying and I am going to make a difference.
This is why number three is obvious, Ms. Greenberg added, because it is the one that makes sense to put on the claim where the diagnoses are already reported, and it already is in the standard. Dr. Cohn reiterated that the role of this Workgroup is data development activities, not quality indicators. He stated that the group needs to understand tools such as claims attachments. As the group discovers the tools, perspectives on what is low hanging fruit, what is easy or hard, may change. Part of the role of the Workgroup is to point out good things, he remarked, to write a letter supporting that, as opposed to having to invent it.
Ms. Handrich shared NCQA’s statement, “The rationale for the adoption of recommendations could further be enhanced if it is clear which additional performance measures could be computed with the inclusion of such parameters; if it is determined how these measures relate to national, regional, or other important health care quality goals; and if the health benefit gained by improving performance in the areas for which these newly deployable performance measures exist is quantified.”
The Workgroup’s unique position is to look across this system, Mr. Hungate stated, and recognize the tensions between the purchaser view that they have no performance measures, and the provider view that they have too many. He defined the task as working on reducing the disconnect—saying that the group is supporting the collection of valid information with meaning to provider, patient and purchaser, that is useful to all in evidence-based ways and that stands the test of a broad system.
Mr. Hungate expressed his frustration at feeling good about a recommendation which is “not really very much.” He proposed that the group could articulate the spectrum of specific measures, articulate that NQF and AHRQ will determine those measures, and say that there is promise in these measures if there are mechanisms, whether it is claims, claims attachment, the continuity of care record, or a new quality transaction.
Dr. Carr named herself “devil’s advocate,” asking what the group’s vision is and whether they are asking people to do a quick scramble for something that is asynchronous with the electronic health record. She believes that these recommendations are naive about the resources required, and that people do not have the time for these details. Including certain data items, she asserted, opens the highway for whatever vendor wants to require them.
To diminish discordance between the purchaser and provider communities, Mr. Hungate stated, the group must find ways to make progress in measurement. This is not specifying that a particular measurement needs to be done, but an attempt to articulate how to make progress.
Leapfrog, payers, and GE have made huge progress in substantive areas, explained Dr. Carr, with recognition on a Web page, and as people strive for that, processes are getting better. She noted that they are not constrained by how to get some data element; it’s a thoughtful process that takes all of the recommendations and delivers something that people believe in. Quality is being moved ahead rapidly by that initiative, she asserted, and she believes that the EHR has more traction today than ever before. Because of that, she feels the group must be flexible, step back and realize a decade has passed and it is in a different place. She proposed working toward accelerating a quality claims approach that will be quantitative, “but that people won’t build that structure once in their institutional experience.” She feels it will take huge amounts of work to capture the Hemoglobin A1cs in administrative records, and at the end of it there will not be an electronic health record.
Mr. Hungate responded that the size of that huge work must be determined, which AHRQ will be doing. This group needs to help structure that discussion so that the right decisions are made.
Dr. Carr also reported that although some hospitals say Leapfrog’s process is too hard, testimony yesterday was that just filling out the form in Bridges to Excellence made places better by organizing, structuring, focusing, and providing evidence-based care. She recommended engaging more people in Leapfrog, rather than creating a field on the claims form for A1c. Dr. Holmes reminded the group that Leapfrog and Bridges to Excellence exclude large portions of the population and are very time consuming.
Panel 4 – Health Plans and Insurers
Kathryn Coltin, Harvard Pilgrim Health Care
Harvard Pilgrim Health Care serves almost 800,000 managed care enrollees in four New England States, Ms. Coltin reported. Providers’ administrative data in the form of claims or encounter transactions, combined with enrollment transaction data, are used to support virtually every business function that distinguishes a managed care organization, she stated.
Ms. Coltin praised the data elements proposed by the Workgroup, saying that they would enhance the efficiency and effectiveness of operations in several critical business areas. These areas are quality measurement and oversight, medical management and clinical programs including population health management and disease management, financial planning and analysis, product development and marketing, and provider contracting and reimbursement. She went on to describe in detail the potential benefits of the data elements for each area.
The added availability of certain lab test results and vital signs would be beneficial in numerous areas, Ms. Coltin explained. It would help health plans target members and physician groups for quality improvement interventions. Improvements could prevent secondary complications and their associated costs, and decrease the risk of mortality. She noted that similar benefits could be realized in health conditions such as obesity where measures have not been implemented due to the unavailability of laboratory data or vital signs needed to identify the target population for measurement.
To identify members at high risk of future health crises, Harvard Pilgrim uses predictive modeling software that has resulted in cost savings by targeting case management services to high risk members, and averting health crises. Ms. Coltin remarked that this has also enhanced member satisfaction. Although lab values, vital signs and functional status could potentially enhance the predictive value of these tools, she warned that the added value needs to be balanced against the cost of collecting these data. Ms. Coltin also cautioned that attributing accountability to any individual care giver is risky, as systems also exert an important influence.
HEDIS measures are used extensively for oversight, accreditation and performance incentives throughout New England, Ms. Coltin reported, adding that the cost of measuring performance is prohibitive and burdensome to both the plan and the provider offices even when based on limited random samples. The availability of lab results and selected vital signs in administrative transactions would greatly enhance plans’ ability to implement pay-for-performance incentives tied to clinical outcomes, she stated.
Ms. Coltin explained that the recommended data elements that would be most helpful for risk adjustment vary somewhat by care setting, and that she has high confidence in data quality collected for almost all data except for arrival and procedure times.
Ms. Coltin ranked the recommendations in the order 1, 3, 2, 4, 5, 7, 8, and 6. She grouped them into those that pertain to hospital transactions (1, 2, 3, 4, 5, and 8) and those that pertain to physician transactions (1, 2, 6, and 8). When talking about data capture, setting is a very important factor, she emphasized, because where and how the data are collected will vary.
“I don’t believe that there is a one size fits all solution for implementing recommended data elements in a standard HIPAA transaction,” stated Ms. Coltin. The vast majority of health care is delivered in physician offices and clinics, she added, where quality measurement is extremely limited and public reporting of these measures is voluntary. Health plans are in a strong position to shed light on the comparative performance of a broader and more representative segment of the physician community, but can only base their measures on administrative data.
Ms. Coltin concluded that the adoption of EHRs will enable more accurate and less burdensome transfer of electronic data from EHRs to claims transactions. However, it may not improve the completeness of the data. She related Harvard Pilgrim’s experience that electronic encounter transactions from one very large practice are less complete on secondary diagnoses than other groups’ paper submissions. Providers tend to enter data only about the problem they are focusing on, she observed, which can lead to incomplete claims and a distorted assessment of the case mix and severity levels of their patients. It can disadvantage providers in the calculation of measurements that require risk adjustment.
Jeff Kamil, M.D., Blue Cross of California
Pay-for-performance has been in place in Blue Cross’s HMO program for about six years, Dr. Kamil explained, and physician groups can now obtain from it approximately 10 percent of what one would call capitation in the HMO program. He noted that it takes a strong incentive for physicians to really work for pay-for-performance.
Dr. Kamil believes that all of the Workgroup’s questions really fall back on the business case, and that there is a strong business case for quality measurement and for pay-for-performance, which depends on those measures. He described how, in California, five health plans with 26,000 physicians have come together and agreed upon standard quality measurements. These medical groups can actually compare each other’s performance. Independent practice associations (IPAs) can also compete with integrated medical groups on any quality measure. Dr. Kamil stated that enabling IPAs to purchase systems to acquire and compute health care data and transfer that data for their own independent measurement and comparison is essential. He also noted that physicians are investing more money than ever in systems to improve care, in part due to electronic medical records on an ambulatory basis. The pay-for-performance programs create necessary incentives for physicians to invest in their own practices and change their work flow.
None of this investment would have occurred without pay-for-performance programs, which are dependent upon having quality measures, Dr. Kamil asserted, adding that quality measures are dependent upon having valid, measurable items that can be compared across provider groups.
In Blue Cross PPO programs, administrative data has been used to collect approximately 12 quality measures to provide independent physicians with their comparative scores across their specialty or geographic area. Dr. Kamil noted the difficulty in reaching individual physicians about quality measures. They are not used to looking at data and comparing themselves, he observed, and the information must be put directly in front of them because they are bombarded with so much else.
“We have delivered checks to individual physicians of up to $5,000 as a bonus, and the physicians are surprised and pleased, because they aren’t used to getting bonuses from health plans and PPOs based on quality,” he stated.
Regarding new measures, Dr. Kamil reported that providers are saying there are already enough measures, while purchasers are likely to say that they don’t have enough measures. Health plans, however, have not fully utilized and deployed existing measures. He believes that improvements to systems of care can only occur through the acquisition of data registries, which are dependent to some extent on electronic health records, as is the long-term success of pay-for-performance systems for individual physicians.
Dr. Kamil explained that it is important to develop quality measures that go beyond HEDIS because there are other important factors that can be used in managing health care. A physician’s ability even to order lead levels and to react to inappropriate lead levels in children is an example of one measure for which having lab values would be important.
Of the Workgroup’s recommendations, Dr. Kamil felt that 1, 2, 3, and 4 were the most important, in the order they were listed. He also emphasized that health status needs to be reported independently by patients. Blue Cross would use laboratory data for its expensive disease state management programs, which focus on patients who really need care, and work with physicians who can improve their care through the use of laboratory data. They would also use laboratory values for predictive modeling and to risk adjust payments to physicians and medical groups. In general, he noted, having lab values in hand and making them accessible online would also help health plans work with physicians who do not have electronic health records.
“The other business case for this data is the mere fact of improving relationships between health plans and providers,” stated Dr. Kamil. “Having quality measures to talk to physicians about is a common ground where health plans and providers can come together to determine how to improve health care, and paying for performance actually creates that common ground.” He believes the two most important things to be developed are a common physician identifier and a common patient identifier, which would make a lot of other areas more manageable.
The recommended data would also be used in developing provider networks and for systems to measure hospital quality. Dr. Kamil noted that hospitals are very complex and take care of many different types of patients, who would benefit from more data on procedure types and outcomes. He added that hospitals are very interested in comparing themselves against value data and emphasized the importance of hospitals agreeing upon measures that would really differentiate them in terms of quality, so that health plans could differentiate them based on price. Quality measures may actually lower health costs with both hospitals and physicians, he observed.
Dr. Steinwachs asked what factors Ms. Coltin and Dr. Kamil see that would help make the business case more broadly in the positive ways they identified. Ms. Coltin emphasized collaboration to make data much more valuable. She also reinforced the necessity of getting providers’ attention and providing value back to them. “We use one format, one measurement system, one way of reporting the data, one set of interpretations. Providers get this data under all our names, and it gets their attention. They love it and some of them have implemented their own pay-for-performance within the IPA based on this,” she stated. The business case has to be made for both the receiver and the giver of the information, making the cost more tolerable because they get something in return.
Ms. Coltin’s examples of information given back included populating disease management registries that list a provider’s diabetic or asthmatic patients and the dates of their last tests. She added that they would love to be able to put lab results into the registries, because most of these physicians do not have electronic medical records. She also noted that they are talking with other health plans about agreeing on a common format for the registries.
The business case is always going to be an economic one or a quality business case that is profound, Dr. Kamil observed, stating that his organization has tried to make the business case carry its own weight. Whatever is collected has to be very pragmatic and not expensive or unreliable. He feels that the only way to keep quality program costs low is through administrative data, which also has to be easy for providers. If it becomes extra work, physicians will be very resistant to doing anything. To have performance systems grow, they have to be administratively easy. They have to have a return on investment for everyone involved.
Dr. Edinger asked whether, over time, there will be problems as pay-for-performance systems evolve, and how that would impact data collection. Would providers be excluded for not meeting certain performance goals? Dr. Kamil agreed that problems could be expected when providers feel excluded, but as long as the data collection is part of normal health care, it would work.
Dr. Holmes asked for comments on whether risk adjustment mechanisms are built into pay-for-performance programs, to which Ms. Coltin replied that their clinical quality programs are based on process measures and so do not require risk adjustment. If they wish to use outcome measures and have lab data, then it will be necessary to account for differences in the patient populations. Finding efficient providers would require some risk adjustment methodology to truly identify them, Dr. Kamil added, noting that risk adjustment is important for health plans if they are really going to segregate networks.
Regarding the providers’ quality measures and administrative data, Dr. Kamil insisted that they try hard not to invent any measures, tending instead to use standard HEDIS performance measures adapted for individual physicians in the PPO program. He also clarified that they do get laboratory data from the labs; it is hard because it is work for the labs and there is no standardized way of delivering the data. Dr. Carr noted Dr. Kamil’s support of lab data as long as it need not be collected manually, but questioned whether that relationship exists everywhere.
This committee has no business developing measures, asserted Mr. Hungate, noting his impression that NCQA and JCAHO are, in effect, the de facto standardizers of information, to which Ms. Coltin added the National Quality Forum. She described their seeking of developed measures that have been validated and are based on evidence, and reviewing those for possible endorsement. NQF then provides a resource of sets of measures. She also pointed out that AHRQ has developed the National Quality Measures Clearinghouse, standardizing what is known about each of these measures. The criteria for NQF’s endorsing a measure are not related to the availability of data, Ms. Coltin reported. NQF just says whether it is a good measure, and they do not have to contend with burden issues or pushback around collecting the measure.
Next Steps — Plans for Future Hearings
Mr. Hungate asked the committee where they see their value-added in this process and what their unique contribution is. Dr. Cohn sees their role as fundamental data development. That can be captured, Dr. Steinwachs explained, by saying this is the kind of business case we heard and the rationale for each of these recommendations, and its potential importance in health care in America and advancing the quality agenda. He favored talking with the groups in September about the process to incorporate this into standards for electronic claims or attachments.
Talking to the vendors about including available information in systems was proposed, but Dr. Cohn felt it was premature. Ms. Greenberg favored talking with vendors of electronic health record systems to see the extent to which they are building in the functionality to report quality measures, and what the challenges and barriers are. She asked whether the groups that are vetting measures have the capacity to move the field towards a more standardized set of measures, whether there is a process toward some type of consistency in sets of measures.
Ms. Coltin noted areas where measures are needed but do not yet exist, as well as conflicting, duplicative measures needing evaluation and consensus. She understood that to be the role of the NQF. She stated that they strongly encourage measurement of unaddressed areas on IOM’s top 20 list, and suggested that the Workgroup pursue offers made on the first day of the hearing. NQF could say which of their currently approved measures depend upon data elements under consideration. The business cases are already developed for these measures, and AHRQ can provide good insights on the cost side. She also noted an offer to describe important areas related to the IOM top 20 for which quality measures are desired but not possible due to data limitations.
Dr. Edinger suggested having presentations on the various forms, covering the data collection elements and problems, and having someone like Jack Needleman review the problems in collecting this kind of data, so the group is more informed on these issues. It was agreed that the next hearing will include CPT Category II performance measurement codes, learning about the 837, the DSMOs, and talking to NUBC and NUCC to find out their activity on these eight items.
Dr. Cohn stated that the group is still data gathering and warned that trying to draw conclusions before having data can lead to the wrong conclusions. He feels there may be surprising things going on in some areas that may cause the group to rethink what is low-hanging fruit and what things need a little push or a little help to become successful. Ms. Greenberg asked whether to wait to take 3, 7 and 8 to the full committee until after the September 14 meeting. Dr. Cohn favored hearing views from those responsible for the standards first. If they already plan to implement these things, this Workgroup just gives its approval. If there is some barrier, he proposed offering support or help to overcome it.
It was agreed that Ms. Greenberg would present to the NUBC in August that this appears to be low-hanging fruit that everybody regards as fairly obvious. Dr. Cohn added that the outcome of that meeting would determine whether or not a letter to the full committee is needed.
I hereby certify that, to the best of my knowledge, the foregoing summary of minutes is accurate and complete.
Robert H. Hungate