Department of Health and Human Services


Quality Workgroup Hearing on Individual Health and Population Health: Potential Impact of the Electronic Health Record


November 18, 2005

Washington, D.C.

Meeting Minutes and Thematic Summary


The Quality Workgroup of the National Committee on Vital and Health Statistics was convened on November 18, 2005, at the Hubert H. Humphrey Building in Washington, D.C. The meeting was open to the public. Present:

Workgroup members

Robert W. Hungate, Chair

Carol J. McCall, F.S.A., M.A.A.A., Vice Chair

Justine M. Carr, M.D.

William J. Scanlon, Ph.D.

Staff and liaisons

Marjorie Greenberg, NCHS/CDC, Executive Secretary

Debbie Jackson, NCHS

J. Michael Fitzmaurice, Ph.D., AHRQ liaison

Julia Holmes, Ph.D., NCHS

Gail Janes, Ph.D., CDC

Eduardo Ortiz, M.D., VA


Others (not including presenters)


Lynn Boyd, College of American Pathologists

Susan Christensen, Health Information Technology Group

Meghna Ranganathan, RWJF

Dan Rode, AHIMA

Erin Matthews, American Society of Clinical Oncology

Wanda Govan-Jenkins, CDC

Susan Baird Kanaan




The Quality Workgroup organized this hearing to talk with experts in two panels about the potential effects of electronic health records (EHRs) on individual health and population health. The participants addressed the core questions of how EHRs can be used to improve health quality, and what building blocks are needed in EHRs so that they serve quality purposes. The hearing had a dual purpose: both long-range visioning and consideration of what needs to happen in the short term. The members of Panel One were asked for their perspectives on the uses of the EHR and health information technology to assess and improve population health. Those in Panel Two focused on how the EHR is positioned to provide information for assessing and improving health at the individual and population levels. The speakers were:


Panel One: Users of EHRs

David Kibbe, M.D., American Academy of Family Physicians

John Lumpkin, M.D., Robert Wood Johnson Foundation

David Lansky, Ph.D., Markle Foundation

Victor Villagra, M.D., Health and Technology Vector


Panel Two: Developers/Suppliers of EHRs

Stan Huff, M.D., Intermountain Health Care

Donald Rucker, M.D., Siemens Medical Solutions

J. Peter Geerlofs, M.D., Allscripts, LLC

Ross Fletcher, M.D., Washington, D.C., VA Medical Center


The major themes and key messages that emerged from the hearing are summarized below. (In addition, see pages 19 ff. of the detailed summary for highlights.)


The population health dimension and related macro perspectives

  1. Human health should be the central focus of quality measurement. Government’s focus, and thus that of NCVHS, should be assessing and improving the health of the population, based on national goals for health improvement.
  2. A policy and analytic level is missing from the quality discussion at present.
  3. The reality of health disparities points to the need for attention to this dimension of quality issues and thus for demographic data.
  4. There is an absence of public will to measure important aspects of quality.
  5. There is a lack of public discussion of the uses of data for quality at the population level.
  6. A pivotal issue is the role of a trusted and competent aggregator.
  7. Research question: Do good quality measures scale up to the population level?
  8. On framing and naming the technology: It is useful to avoid the “record” paradigm and instead use “system” terminology. Similarly, a network metaphor and model may be preferable to an institutional (EHR-based) model.
  9. It’s an open question whether the desktop EHR/EMR can help improve individual and/or population health, as distinct from helping to measure each of these.


The importance of patient-centricity

  1. How can the country move from the current institutional environment to a person-centered information and measurement environment that centers analysis on the person as the central decision-maker and information source?
  2. Quality measures should be patient-centered, not provider-centered.
  3. Quality measures should also be consumer-relevant.
  4. Information from patients is needed to assess outcomes. This points to the role of the PHR and patient input.
  5. The patient and family may be the most appropriate “virtual aggregator” and integrator of health data on themselves.


The health care system; financial and economic issues

  1. The current context for quality issues: payers are concerned about cost while health care leaders focus on expansion.
  2. Current spending on health care is not buying value. Disparities are a major factor, though sub-standard care affects most Americans.
  3. The quality agenda is losing out to other priorities in health care and among payers and employers. Competition is not currently based on quality.
  4. The lack of a business case for using information to promote health is even greater at the population health level.
  5. It is important to understand the differences between outpatient and (more complex) inpatient arenas in regard to technology adoption and quality.
  6. The current delivery system may lack the capacity to employ emerging information technology and health care quality strategies.
  7. Incentives need to be aligned with national adoption and quality goals.
  8. Currently, the Pay for Performance (P4P) trend is disconnected from the trend toward EHRs and the NHIN, and contributes to fragmentation.


EHR adoption patterns and prospects: implications for quality

  1. A transformative trend toward EHR adoption is under way.
  2. The momentum toward EHR adoption increases the urgency of moving forward with quality measurement strategies.
  3. The phases of technology adoption: substitutive, innovative and transformative.
  4. There’s a lot of low-hanging fruit related to quality if adoption can be increased and users can be moved to the innovative and transformative phases of adoption.
  5. To some, the key issue for quality is not EHR adoption but the lack of will to measure important aspects of quality, along with the lack of strategy.


Centrality of standards, interoperability and connectivity

  1. Standards are a crucial EHR design specification. EHRs need standard terminology and vocabulary, including the action words.
  2. Quality measures need to be standardized; AQA is making progress on this.
  3. Standardization is needed to permit sharing of decision support tools, medical knowledge and protocols.
  4. A Workgroup member envisions mechanisms in the market for open transfer of discoveries. One movement in that direction is the effort to standardize the application programming interface (API), which could be transformative.


Improving quality measurement

  1. Most current work on quality is way too granular and fails to capture the more complex aspects of health and health care.
  2. There is a risk of further institutionalizing silos if current approaches are digitized.
  3. The consensus process now used to develop quality measures is flawed and has not produced good measures.
  4. Quality measures should capture the life continuum, the disease continuum, hand-offs between settings, the interface between the patient’s family and the system, and the relationship between financing and care delivery. The ultimate focus: human health. The intermediate focus: health improvement.
  5. Demographic data are needed for assessment and reduction of health disparities.
  6. Information on outcomes, the health experience, etc. must come from patients.
  7. Research question: What do consumers want? Do measures capture this?
  8. Process measures should be avoided as the fundamental mechanism of quality assessment; but process measures that truly represent health improvement are acceptable.
  9. The best quality measures are a byproduct of automated workflow.
  10. Wayne Gretzky: “I skate to where the puck will be.”


The quality measurement and improvement “package”

  1. EHRs, even at their best, are just a tool.
  2. A suite of applications is involved in quality improvement. A key element is the interface between the EHR and decision support.
  3. The case examples of Intermountain Health Care and the Veterans Administration show the use of EHRs and systems and institutional processes to evaluate and promote quality of care. Among other things, they demonstrate standardization, interoperability, the processes noted above, and the business case for quality.
  4. Interpersonal, institutional and thought processes must be part of the quality package, as the above cases illustrate. People have to decide what to focus on and put quality, properly defined, at the center as the focus of accountability and support.
  5. The disease management model as well as the VA and IHC cases demonstrate the population-level health quality uses of automated clinical data.
  6. Other relevant models and concepts: process automation and Wagner’s chronic care model.


Suggestions for NCVHS and the Quality Workgroup

  1. Contribute the missing analytic and policy dimension.
  2. Connect the individual and population perspectives.
  3. Figure out how to implement patient-centricity with respect to quality.
  4. Help move beyond the “record” paradigm.
  5. Drive a national discussion of best practices.
  6. Address the capacity of the NHIN and EHRs to measure and improve population health.
  7. Get the word out to the public about the evidence for using the NHIN for quality.




Individual Health and Population Health:

Potential Impact of the Electronic Health Record





The Quality Workgroup organized this hearing to talk with experts about the potential effects of electronic health records (EHRs) on individual health and population health. Following introductions around the room, Mr. Hungate explained that the hearing has a dual purpose: long-range visioning, and consideration of what needs to happen in the short term. He reviewed the evolution of the Workgroup’s thinking in recent years. After completing its 2004 report and recommendations, which focused on improving claims information, the Workgroup shifted its attention to facilitating use of the EHR to improve quality and health. A June 2005 hearing helped to clarify the thinking about this emphasis, which is partly framed in terms of the secondary uses of clinical data. The short-term question regarding secondary uses is whether the EHR is being designed appropriately to produce the content needed to assess and improve quality.


Dr. Carr commented that the core question for the Workgroup is, What quality will come out of the EHR? She elaborated as follows: What building blocks are needed in EHRs to make it possible to achieve both primary and secondary quality goals, configured in a dynamic way to enable responses to new evidence, questions and metrics? She noted the sense of urgency caused by the proliferation of pay for performance measurement requirements and the disconnect between these developments and the EHRs now taking shape.


The first (morning) panel, composed of information users, was asked to provide their perspectives on the uses of the EHR and health information technology (HIT) to assess and improve population health. They were sent these questions in advance:

1)      How will widespread use of electronic medical records improve the information for measuring and improving health?

2)      Will it have different impacts on population health than on individual health?  How? Why?  What are the translation issues?

3)      What important things will it not do?

4)      Of the six elements in the IOM quality definition, which two elements will show the most change and which the least?

5)      What worries you the most about this transition?  Or, what are the mistakes we are most likely to make?



David Kibbe, M.D., American Academy of Family Physicians (AAFP)


Dr. Kibbe runs AAFP’s Center for Health Information Technology, which helps its members acquire standards-based, affordable EHRs. AAFP has 60,000 members, primarily in small and medium-sized medical practices. This group’s use of commercial EHR systems has grown from 10 percent in 2002 to 30 percent in 2005. He noted that AAFP approaches quality and HIT questions in terms of a suite of applications, not just the EHR, providing opportunities for quality measure collection, reporting and feedback into an improvement cycle. He agreed with the Workgroup that these purposes must be thought through ahead of adoption to create the desired capacity.


He stressed that this is a time of rapid growth in EHR adoption by family physicians, with collection of quality and performance data as a routine by-product. He also called attention to the fundamental differences between inpatient and ambulatory care as they affect EHR adoption. The former has large revenues per encounter and relatively few admissions; the latter has large volume and small revenue per encounter. The two markets have different vendors, buying cycles and infrastructures. The ambulatory care market tends to be more innovative because it is more likely to be buying new information technology.


AAFP surveys its members about EHR adoption. Regarding the barriers to using them, the 2005 respondents not using EHRs indicate they are less concerned about cost and other barriers than in previous years, and more ready to purchase EHRs. Among EHR users, overall satisfaction is high. Most important, he said, the total cost of ownership of integrated EHR systems is “approaching affordability” at $7,232 per doctor per year (averaged over three years) for small practices. Dr. Kibbe noted that this figure is well below the $30,000 or $40,000 figure “often quoted in Washington.” He added that EHRs may actually be cheaper now than the cost of transcription. As a result, the return on investment is becoming much easier to justify.


The key driver behind the value proposition of EHRs, he said, is interoperability and connectivity. He cited several areas relevant to quality in which progress is being made, including the passage of the Continuity of Care Record (CCR) as a fully validated, ANSI-accredited ASTM standard and progress with ELINCs and e-prescribing standardization. There are problems with DOQ-IT, but he still thinks it is the right idea and should be made a national priority. AAFP is very disappointed with the G-code scheme, which backs off from using real, practice-based data.


Finally, Dr. Kibbe said the way to assure that physicians can use EHRs to reliably, dependably and accurately export quality, performance and cost information to data aggregators is to leverage the early successes with EHR adoption. He offered these recommendations for working toward this goal:

  • Continue to work with federal and state governments and private health plans to help finance affordable, standards-based EHRs.
  • Pilot the automated export of quality and performance data from practice EHRs to data aggregators.
  • Continue progress within the Ambulatory Care Quality Alliance (AQA) on standardizing measures. He called AQA “a very important group … where some of this work can be done.”

John Lumpkin, M.D., Robert Wood Johnson Foundation (RWJF)

Dr. Lumpkin noted that a transformation is truly under way in increasing EHR adoption, and that now is the time to address quality issues. His remarks focused first on the context for improving quality in the electronic age. An RWJF study found that affordability is business leaders’ primary concern with respect to health care (52%); only 12% ranked quality as their top concern. This is a shift from the 1990s, when business was pushing the issue of quality. Businesses are concerned that their employees will be unable to pay for their health care (79%). He juxtaposed this with the findings of an RWJF survey that health care leaders are not focusing on cost control strategies but rather on expansions. Add to this the finding by researcher Elizabeth McGlynn et al. that half of all health care does not meet standards of care; “much of our health care spending has no value.” He also cited studies showing a lack of association between the cost and quality of care, disparities in the quality of care for African Americans and whites, and disparities in the quality of care for entire regional populations as a function of the proportion of minorities in the region. The latter studies demonstrate, he said, the importance of having demographic data on populations in order to measure and assess quality and disparities. He also showed research findings suggesting that adherence to treatment guidelines can have a particularly positive impact on the treatment and health of minorities.


Dr. Lumpkin then introduced the Chronic Care Model, developed by Ed Wagner. It illustrates the importance of a group of factors including clinical delivery system design, decisional support, clinical information systems, and community resources and policies as well as productive interactions between informed, activated patients and prepared, proactive practice teams. He asked, Why haven’t we solved the problem of quality? and then offered these explanations:

  • The individualistic culture of medicine
  • The fact that quality is invisible to consumers and providers
  • The lack of a business imperative for quality
  • The lack of connections between parts of the system (hospital, office practice, home, pharmacy)


Referring to an information-flow model developed by Connecting for Health, Dr. Lumpkin commented on the importance of remembering that the ultimate objective is to assure quality for patients in the places where they receive care. The challenge is to design EHRs and EHR systems to meet the needs of patients and providers, and to be sure to look at the right data (he illustrated from the pro baseball arena, citing the book Moneyball). Having data to measure quality, including data on disparities, has to be a first principle. Using another sports analogy to illustrate the desired approach to measurement, he noted Wayne Gretzky’s comment that he “skates to where the puck will be.” Finally, he commented on the need to go beyond the consensus process, which is the current method of building measures; and also on the need to move the unit of measurement from provider- to patient-centered measures, with the core objective of moving knowledge to the point of service.


David Lansky, Ph.D., Markle Foundation

Dr. Lansky approached the topic of quality in terms of the information needs of patients/consumers and society as a whole. Speaking as a critic of current measurement models, he asserted that it will not be fruitful to apply information technology to current measurement strategies. The opportunity at hand is to think about the framework in a broader sense and to question the assumptions of the current environment. Dr. Lansky said he envisioned a role for NCVHS in contributing an analytic and policy dimension to existing work on quality measurement. Current work tends to focus at a level of granularity that frustrates any attempt to evaluate health care and health. He asserted that the primary problem now facing quality measurement is the lack of will to measure important aspects of quality. The issue of human health, he said, should be paramount in the discussions of how the IT infrastructure will measure improvements, decrements and disparities in health. He expressed concern that the current large-scale investment in health IT will only exacerbate silo fragmentation if these broad policy and resource allocation questions are not addressed.


Dr. Lansky called attention as well to a conceptual fragmentation that NCVHS may be able to address: Existing measurement strategies are insensitive to the life continuum, the disease continuum, hand-offs between settings, the interface between the patient’s family and the system, and the relationship between financing and care delivery. It is important in the current design phase to assess how to integrate all these factors. He cautioned against instituting process measures as the fundamental mechanism of quality assessment. Instead, the national policy structure should focus on identifying national goals for health improvement and the measures to address and evaluate the achievement of those goals. Another concern is that the U.S. will digitize the silos that have arisen largely for business reasons (e.g., funding and licensing), rather than creating a digital climate that makes it possible to transcend the silos. In addition, he proposed as a thought exercise that the Workgroup center its analysis on the person as the ultimate source of every piece of information in the health care environment and the central decision-maker in the system’s resource use, and that it then evaluate how to get from the current institutional environment to a person-centered information and measurement environment.


Dr. Lansky suggested that it may be more fruitful to imagine a digital information environment that is not reliant on the EHR, given that much health information is already available in distributed networks and this is the direction HIT is heading. A network metaphor rather than an institutional metaphor thus may be useful in thinking about the platform for quality measurement. Citing Google’s approach to information, he recommended a comprehensive analysis of network models of information handling.


Finally, he cited the policy around the Medicare Part D Drug benefit as an example of the need for a new mindset, expressing dismay at the apparent absence of any public discussion of creating an evaluation framework for this very expensive health benefit. He cited this gap as an example of the unique role the public sector could and should play in defining the criteria for evaluating public value, putting a measurement infrastructure in place, and determining how the IT environment can support the evaluation process.


Victor Villagra, M.D., Health and Technology Vector


Dr. Villagra described his work with disease management (DM) programs and noted the rapid growth in the use of EHRs by health professionals. The DM phenomenon provides a useful lens for understanding the link between EHRs and population-based quality improvement because it is outside the traditional delivery system, interdisciplinary, widely adopted by payers, and used as a vehicle to improve quality and performance. EHRs play a critical role in DM, populated with information from multiple sources including patients. The engine that drives quality is shared values among stakeholders, each of which derives value—e.g., clinical quality, patient satisfaction and cost containment—from the arrangement. Dr. Villagra described the information system components and utilities in DM and how they work together based on these shared values. As his first major point about quality and EHRs, he stressed that the role of EHRs should be subservient to the attainment of broad, shared values and objectives.


Second, he stated that attaining superior quality requires the deployment of a different organization of care and delivery system than what is now available. The traditional delivery system is not capable of absorbing increased information output, nor can traditional care settings accommodate the multiple activities needed to bring about changes like patient education and motivation. Fulfilling the promise of the EHR to improve quality thus requires transformation of the delivery system to (for example) manage large-scale coordinated actions such as sophisticated call centers and to handle large volumes of information and transmit it to patients.


Third, he recommended a dramatic change in reimbursement strategy in favor of outcomes-based payment that explicitly rewards quality. He cited a paper by Robert Miller in Health Affairs, “The value of electronic health records in solo or small group practices,” which shows shifting attitudes toward the primacy of quality.


Fourth, he cited the importance of the ability to aggregate data in a network environment, as is possible in the DM model. Determining how to aggregate and leverage data for public health purposes, with proper privacy and security protections, will be a major challenge as the field moves forward.


Commentary: Dan Friedman, Ph.D.

Dr. Friedman noted five themes and issues raised by the foregoing presentations. First, he noted the important distinctions in the role of EHRs (which he called “the desktop electronic medical record”) vis a vis 1) improving the quality of individual health care, 2) improving the quality of individual health, and 3) improving the quality of health at a population level.


Second, he noted the need for discussion and debate on the question of whether the EHR will actually improve the health and health care of individuals and populations, as distinct from improving the measurement of these values.


Third, he called attention to the role of data aggregators, which will be pivotal in improving the measurement of the health of populations. “It is all about scaling up,” he said. He added that while there are technical and architectural issues involved in scaling up, the major issues are political, as the experience of other countries demonstrates.


Fourth, expanding on Dr. Lumpkin’s point that the current health care market does not base competition on quality, Dr. Friedman observed that it is even more the case that competition in the health care marketplace is not based on the secondary uses of EHRs for population health measurement purposes. This fact, he said, needs to be acknowledged.


Finally, he pointed to the “disturbing and instructive” fact that the population health arena does not enjoy the kind of public interest in and use of data that are common in the professional baseball arena (as evidenced by popular books on the subject). In other words, no public discussion is taking place about how EHRs might be used to improve the measurement of population health.




Dr. Kibbe agreed with Dr. Lansky’s criticisms of the desktop EHR and the preferability of having a few companies provide low-cost ASP model network systems. However, this is unlikely to happen soon, he said, and he advised against trying to fight the trend of medical practices acquiring their own infrastructures. He expressed concern about another trend, that of large organizations building very expensive private networks. Both trends will make it more difficult to get the data needed for population studies. He asked, How can we get the most accurate information in quick turnaround time into a trusted and competent data aggregator so it can be analyzed?


Mr. Hungate asked whether a population-based measurement system is needed as a benchmark against which others can judge their performance. Dr. Kibbe noted Dr. Lansky’s observation that there is no political will to do this.


Noting Dr. Kibbe’s optimism about EHR adoption, Dr. Scanlon asked him to comment on the momentum toward pay-for-performance. Dr. Kibbe pointed to the lack of interoperability between EHRs and health plan systems and the current impossibility of meaningful, patient-centered, all-payer aggregation. He noted AAFP’s focus on the CCR as a container for information on patients that is independent of EHRs. This is a short-term approach in the absence of a centralized, trusted entity to aggregate data from different sources. He added that even once there is a data aggregator, it will not be possible to extract data from many sources.


Ms. McCall asked whether Dr. Kibbe was seeing interest in quality measurement in the small practices now adopting EHRs. He responded that most practices express interest in quality but seem to define it in a practice-centered way, related mainly to improving work flow and finances rather than patient care.


Depicting health care information systems as a nervous system, Ms. McCall speculated that sometimes, physicians want the information to feed back into practice quickly, as with the autonomic nervous system, rather than having it subjected to lots of “cognitive chewing.” She asked Dr. Kibbe to comment on the array of terms in this arena, notably the similarities or differences between an EHR and a CCR. Dr. Kibbe agreed that the semantics are “really confused,” which is an important issue. The term “electronic health record” can have many meanings; in contrast, the CCR is a highly defined content standard designed to allow a defined set of health data to go from one computer system to another (“from Company A to Company B”) and be read and understood. It is, in effect, a snapshot, and it contains a lot of information that could be used for quality measurement.


Mr. Hungate reminded the group of the need to give equal attention in its discussion to both long-term objectives and short-term priorities for developing the EHR for quality purposes. He then asked about the potential of disease management as a basis for aggregating, measuring and evaluating population health. Dr. Villagra stressed that DM addresses many diseases at once, clustered around the patient, and the EHRs for DM are able to house information across diseases. The need to integrate in this way has led to meta-guidelines for treating coexisting conditions.


On the terminology question, Dr. Lumpkin noted that the NCVHS NHII Workgroup avoided the term “health record” when it developed Information for Health. He favors terms that include the word “system” (e.g., personal health information system, provider-based health information system) because of the multiple meanings of EHR and because of the importance of including decision support along with the record. He added that while DM will continue to have an important role, other trends such as consumer-driven health care and better decision support are moving in other directions.


Dr. Lansky noted that the present discussion is about forms of virtual integration of care delivery and coordination. From an IT point of view, this integration can happen either with the payer or DM entity as an aggregator of information, or with the patient/family integrating their own data. Connecting for Health emphasizes the latter approach because only patients can supply data about outcomes, patient experience and changing needs. He favors analyzing how to enable the person to be the virtual integrator of the information flow. Regarding DM, he observed that an optimal information and quality measurement environment would allow that model and many others to proliferate and be evaluated as to their health effects.


Dr. Ortiz commented that the VA uses many process measures as well as intermediate outcome measures, some of which are evidence-based, and he asked Dr. Lansky what he thought the VA should be measuring. Dr. Lansky affirmed that the VA should measure just what it is measuring; he explained that his comments were directed at the role of public policy and national strategy in setting a measurement and IT agenda. In particular, he mentioned the need for “consumer-relevant measures” that can help people understand whether health spending (whatever its source) is producing the desired health benefits. Primary and intermediate outcomes are important to the extent that they contribute to improvements in health, which is where the focus should be. He noted the benefits of the PHR as a platform for information on endpoints. Asked if we know enough to know what we should be measuring, he observed that there is a rigorous set of outcome measures in the world of the FDA and clinical trials, but little outside that area. He added that we know enough in some areas to compile rigorous, well-documented measures.


Dr. Lumpkin noted that measures tend to be designed for available data, and the problem is how to get the data on other factors, such as functional status. Dr. Villagra noted the rise in activity to cash in on pay for performance schemes, a trend that may be at the expense of other important processes and perspectives. He recommended finding a way to get patient input on outcomes and tie this to payment incentives.


Dr. Holmes shifted the focus by asking for the speakers’ thoughts on what objectives the Workgroup should pursue to advance quality in health care. Ms. McCall framed this as a request for their top three wishes.


Dr. Lumpkin recommended a combination of short- and long-term work:

1)     Look at the quality applications that can be inserted in the EHR (for example, a common interface between decision support and the EHR).

2)     Think outside the medical record paradigm to consider what can be done when data are digitized and in systems, bearing in mind the goal of pushing knowledge to the point of service.

3)     Think about ways to enable a patient-centric concept of quality.


Dr. Lansky agreed with these recommendations and added the following:

1)     Work through scenarios for the next couple of years using alternative architectural models for (a) broad EHR adoption or (b) a networked digital database model.

2)     Lay out assumptions about the availability of data for public disclosure, based on careful analysis of the legal and business issues associated with rights and intellectual property. (He added that it will be necessary to assume voluntary disclosure by individual data holders.)

3)     Identify a few short-term (~3-year) use cases (e.g., the Medicare prescription drug benefit) and use those to drive analysis of what it would take to implement the IT infrastructure and the information requirements.


Dr. Villagra had these recommendations:

1)     Understand whether the delivery infrastructure is capable of acting on the new data and knowledge generated by implementation of a health record, so that the technology does not outstrip the organizational capacity to use it.

2)     Examine the requisites for an entity or institution to serve as the potential aggregator of information. This entity would aggregate information, allow analysis of population health, and intersect with delivery systems. (He recommended building the aggregator function on care coordination.)

3)     Articulate what the country wants from its health care, possibly based on IOM reports, in order to identify the shared values that underlie the quality effort.


Following on these comments, Ms. McCall asked for comments on the compelling events or “burning platforms” that can be used to ignite these issues. She cited as possible examples the Chronic Care Improvement Program (CCIP) pilots and Pay for Performance (P4P).


Dr. Lansky cited the British Excellence Model as the best example of using P4P to reward IT adoption within an outcomes-oriented approach. He noted that Ms. McCall’s two examples are at opposite ends of the spectrum: the CCIP is about integrating care while P4P is fragmentary and unable to address the continuum.


Dr. Lumpkin predicted that PHR systems will be the disruptive innovation in health care, driven by people’s need for more information to help them make the decisions compelled by health savings accounts. If people can’t get the data they need, they will get angry and go to their elected officials and their providers, pushing them toward adoption, as well.


Dr. Villagra added that medical risk management, to which physicians are very sensitive, can push the momentum toward utilities that align data and quality with other benefits.


Dr. Scanlon noted the presenters’ comments about the transformation of the health system and expressed his pessimism that P4P will be capable of effecting significant positive change, given the resistance even small changes generate and the fact that P4P is designed to be budget-neutral. Dr. Villagra wondered about the potential, within this constraint, for rechanneling health care away from ineffective care and redistributing income in positive, outcome-oriented directions.




The panelists in this group were sent the following questions prior to the hearing:

1)      If health improvement is a good statement of quality from either a patient’s or population view, what will be the most valuable contributions from widespread adoption and use of interconnected EMRs?

2)      What assumptions are you using in making that choice?

3)      Setting aside considerations of privacy, security and confidentiality for this answer, what are we most likely to mess up in seeking to improve health?

4)      How will EMRs address the Pay-for-Performance demands?  Are there actions not yet underway that are needed?

5)      What kind of changes do you expect in the medical knowledge management system from this technology, in the short term (3-5 years)? In the long term?

6)      What control systems are needed to make decision support systems and tools achieve the quality through process improvement that is so much a part of expectations?


Mr. Hungate said the afternoon session would focus on how the EHR is positioned to serve the demands of information users like the members of Panel One. He noted the Workgroup’s concern that the requirements of the quality agenda are not adequately addressed in the EHR rollout. He noted that Dr. Huff, an NCVHS member, is helping the Committee understand the secondary uses of data, a core focus of attention.

Stan Huff, M.D., Intermountain Health Care


Intermountain Health Care (IHC) is a not-for-profit corporation with 22 hospitals, 24 clinics, and 1.4 million patients for whom it has EHRs. Dr. Huff reviewed IHC’s system design considerations, including these: speed is everything; business events trump good system design; and good people and sustained effort are the only assurance of good design. The EHR system is patient-centered and longitudinal. It permits data sharing from a common repository using common terminology services, a formal information model and modular architecture. He stressed the importance of building in decision support from the ground up as a modular part of the system, in a manner that accommodates rapid changes in decision logic. Finally, standards are the future in terms of interoperability.


IHC’s goal is to “provide fast, least expensive and highest quality patient care”—a goal it believes it can only meet by using a computer-based system. It is designing a system in which the computer is an active part of patient care. The computers need to actively support these uses: real-time patient-specific decision support, data-sharing, sharing of the decision logic, bio-surveillance, data analysis and reporting, and clinical research. Decision support involves alerts, reminders, protocols, advising, critiquing, interpretation and management (the latter involving purpose-specific aggregation and presentation of data).


Dr. Huff showed a diagram of the IHC system, which has a clinical data repository that connects to ancillary systems through an interface engine that converts information into standard, structured and coded representations against which decision logic can be used for patient care decisions. The coding uses a health data dictionary that corresponds fairly closely to LOINC and SNOMED codes. Ten people support the data dictionary, to map the concepts to ancillary systems and create new concepts and data structures. Twenty-six people provide the more than 60 interfaces, which are split about evenly between clinical and administrative purposes.


He then described the uses of this system to evaluate and promote quality of care. IHC has instituted computerized protocols, alerts and other mechanisms. For example, he showed longitudinal data on physician testing of diabetic patients for HbA1c. A computerized protocol produces personalized reports and shows which patients are out of protocol; it has resulted in dramatic improvements in physician performance. Similarly, he showed protocols and positive results related to adverse drug events, elective childbirth inductions before 39 weeks, and well newborn bilirubin testing. In most cases, the improvements have translated into significant savings for IHC; in the case of bilirubin testing, the quality improvement cost IHC money because the cost of testing is not offset by the resulting decrease in readmissions. This, he said, illustrates the need to align incentives with quality goals.


Finally, Dr. Huff offered several recommendations. He noted that interoperability does not, by itself, achieve quality; IHC’s experience illustrates that the process is enabled by the EHR but driven by people who create mechanisms to use information to drive system change and improvement. Second, incentives must be realigned to reward the entire system, starting with those who collect the data. It takes providers time to create good data, and at present they do not benefit from the results as much as others do. This issue is in addition to the disincentive noted above. Third, Dr. Huff posed the question: how do we initiate change in the practice of medicine? He noted that systems must compensate for the fact that physicians cannot remember everything. Research is needed to determine how computers can help people create quality, with a way to share the computerized protocols across institutions.


J. Peter Geerlofs, M.D., Allscripts, LLC


Dr. Geerlofs stressed that EHRs are just a tool and are meaningless without the right processes, people and thought about their use. He noted the growing interest in EHRs among physicians in the last 12-15 months. Allscripts is mainly focused on large ambulatory care practices. More than 20,000 physicians use its tools to write more than a million prescriptions a month.


He noted the presence of a lot of low-hanging fruit that can be harvested, with significant impact, if physicians have the right incentives to adopt the tools. His company’s motto regarding electronic tools is “If doctors don’t use it, nothing else matters.” The U.S. has had the technology to implement EHRs for 15 years, and Allscripts has given a lot of thought to the question of why doctors don’t use the systems. Noting the importance of cultural factors and will, he outlined the three phases of technology adoption: substitutive, innovative and transformative.  In the first, new technology is understood in the context of old technology (e.g., horseless carriages). Dr. Geerlofs noted that it is important to get rid of the term “electronic medical record” because it is a highly substitutive term. In the second, innovative phase, people see the possibility of different, more effective and creative uses of the new tool. Within a year, this leads to true transformation in which the technology is used in ways that could not have been imagined. Allscripts has identified drivers to get physicians past the first phase of EHR adoption. He noted that usually, speed is their primary interest, not quality; once they move forward in the adoption cycle, they begin to see other benefits and uses. He said his “plea” to NCVHS is simply to get started and get electronic records in the hands of as many clinicians and consumers as possible.


The deterrents to physician adoption, in his view, include large numbers of prescriptive protocols and interruptive alerts as well as inflexible note entry. Allscripts favors a referential decision support model that makes it easy for doctors “to do the right thing” and harder to “do the wrong thing.” It will introduce 2,000 Guideline Templates in the course of 2006, a patient-centric tool to help doctors decide on care plans and communicate them to patients. Dr. Geerlofs described the templates and showed a screenshot of one. They not only drive the health management plan but automatically capture the most important data and serve as the basis for prospective functional outcomes questionnaires to patients.


Donald Rucker, M.D., Siemens Medical Solutions


Dr. Rucker highlighted process automation as the best way to achieve quality. It involves a fundamental rethinking and breaking down of the steps. He illustrated his point with case studies including Henry Ford and FedEx, and then recommended figuring out “how to drive that in health care.” In that sphere, manual quality data capture is too expensive; quality needs to become a by-product of automation, with data capture as part of the workflow. He recommended targeting first what needs to be changed and then working backwards to “the variables.” He noted that the current work in the Department on vocabulary is similar to the field of enterprise software architecture, and he recommended that the Workgroup talk with people working in this field.


He then focused on specific tools that can generate useful health care data and process engineering, noting the confusion about structured versus free-text data and stressing the importance of plans rather than signs and symptoms. He stressed that physician behavior is based on workflows; physicians can be motivated by opportunities to improve their workflows. He reviewed a series of tools to improve clinical and information processes: natural language processing, vocabulary services, workflow engines (automated flow charts) and communication automation (voice over Internet Protocol, or VoIP). These new technologies, he said, can help produce quality data. “The ultimate simple quality statistic” is cycle time, which can save lives. He showed data from institutions that have achieved huge improvements in this area and suggested that NCVHS come up with a metric of cycle time.


In conclusion, Dr. Rucker reiterated that the best quality data come as a byproduct of quality automation. He proposed being “very clever” about vocabulary tools and services, and praised the Department’s RHIO initiative. If all else fails, he recommended simply paying for “lots of performance”; people will have to buy IT if they’re asked for “hundreds of core measures.”


Ross Fletcher, M.D., Washington, D.C. Veterans Administration Medical Center


Like several other speakers, Dr. Fletcher noted that he had been involved in IT since the 1970s. He said his remarks would illustrate how improvement occurs with widespread adoption of interconnected EHRs. The VA’s IT system, VistA, connects 6 or 7 million patients, 180 hospitals and 800 clinics in a single system. Adoption was organized from the top down after the director and staff became committed to it. Using expert and non-expert advisors, the VA system creates local ownership wherever possible through such mechanisms as customized templates and reminders. Other diffusion strategies have been packaging popular (e.g., discharge summaries) and unpopular (e.g., order entry) components together, keeping software intuitive and user friendly, and providing real improvement in patient care. Dr. Fletcher noted that a tipping point in adoption comes after about 60-70 percent of doctors are using HIT.


He then showed a series of screen shots of VistA, including notes and reminders, and described how they are used in care and quality improvement processes. He also showed comparative data on provider performance and described their use in performance improvement strategies. For example, data are aggregated by provider group and the providers are rank-ordered, by name. (He noted that embarrassment works as a motivator, though “pay for performance is even better.”) The VA system also uses competition and comparison between its 22 regions to motivate quality improvement, as well as comparisons with the data from other systems such as Medicare. The system permits the creation of automatic reports, which help to improve patient care. Dr. Fletcher noted that with EHRs it is possible to examine data over time and for large numbers of patients, showing patterns and changes in health indices. He described uses of aggregate data for research and care improvement.


Patients have access to their data through the MyHealtheVet program. (He noted that none of the patients in the VA system in New Orleans lost their medical or pharmacy records.) He showed screenshots of the patient portal and described the benefits of this system for patients and their clinicians. For example, patients have their own reminder systems; and they can record their symptoms or vital signs (e.g., blood pressure) on a daily basis, which is critical to managing certain conditions. Physicians can see this information from a remote location as patients move around the country. In addition, the enriched data, when aggregated, permit better research. Besides health improvement, Dr. Fletcher said, the result is that “we are changing medical knowledge.”



Asked how the VA decides what to focus on within the huge volume of data it collects, Dr. Fletcher said it looks for performance measures that connect to logical improvement in health. The performance measures are set centrally by a committee of physicians. The VISN (service region) directors are responsible for the performance of their regions. The targets continually rise. Ultimately, the targets and measures are evaluated in terms of their benefit for patients. He added that the record itself was defined by physician user groups. Mr. Hungate commented that this is a good model of clinician-controlled performance measures. Asked how many measures are in a set of measures on target conditions, Dr. Fletcher said there are about 50 measures, only some of which apply to any given patient. The reminder system identifies the operative conditions to be monitored for each patient.


Asked about modular vs. narrative fields in the charts, Dr. Fletcher said the note is customized to the individual doctor’s preferences. Dr. Ortiz explained that there are structured components such as problem lists, and doctors can build individualized templates, which can also be shared with others.


Asked to describe the process in which physicians are shown the performance data and discuss it, Dr. Fletcher said reports are given out to the doctors, who then discuss them at regular (~weekly) meetings. Dr. Carr pointed out the number of factors involved in quality improvement, as shown in the VA example, including not just the EHR and serial data but aggregation, decision support, analysis and, finally, interaction with clinicians.


Ms. McCall asked Dr. Geerlofs a series of questions to try to get at the management functions and processes involved in the use of data by Allscripts’ clients to drive quality improvement. Dr. Geerlofs said that once clinicians decide on a particular disease focus and objective, they can use the health management plans to generate aggregated reports that track performance over time. Most clinicians tend to start by using “canned reports”; the more sophisticated organizations eventually figure out how to use the tools more creatively. He added that his company is trying to collect information on best practices and share them with other clients. He suggested that the Committee consider helping to increase the national discussion about best practices for using EHRs to improve quality. He agreed with a comment by Dr. Rucker that “typical customers” can’t be expected to “think this up from scratch”; they need help to identify the most important content.


Mr. Hungate asked the panelists to comment on the design and implementation pitfalls to watch out for in the effort to use EHRs for quality improvement, given that it takes time to get a product right. Dr. Rucker focused on the pitfalls of too much external pressure on medical practices to adopt technology they are not ready for, which would create a backlash. Dr. Geerlofs pointed to the EHR certification initiative as promising, along with a “coalescence of functionality” in recent years that he believes will lead in the near future to innovative uses of content. He believes systems are getting better and better and that the tipping point for the tools to transform health care is not far off.


After noting that the VA bears little resemblance to “a typical small practice,” Dr. Ortiz said VistA has not been strong at practice management and it remains to be seen whether its rollout through the public domain will be successful. However, it is a tried and true system, and practices can avoid upfront costs by adopting it. The VA has a few pilot projects now on implementing VistA in public clinics.


Mr. Hungate expressed concern about private health care institutions’ inability to share tools such as decision support, an inability he called “a productivity limiter.” Asked to comment, Dr. Huff first stressed that discussion of such issues needs to recognize the large difference between the inpatient and outpatient environments; for example, decision support is far more complicated in an inpatient environment. That said, he agreed that the lack of interoperability and inability to share is a constraint. One thing that happens is that people install a starter set and then create local terminologies, making it impossible or at least much more complicated to transfer knowledge. To avoid “the pain of not standardizing from the start,” it would be preferable to get people to be consistent from the outset.


Dr. Ortiz asked the panelists to recommend priorities for the Workgroup to move this area forward. He noted that the VA has been working on developing sharable decision support tools. Dr. Geerlofs pointed to the importance of having standard vocabularies at the core of EHRs. He added that his company, like other vendors, has to “serve two masters” —both the clinicians using the system and the need to interact with other systems and with payers— and this is not easy. Dr. Rucker noted the importance of verbs in health care (e.g., deliver, evaluate, move, cut). These are not typically part of standard medical vocabularies, yet this is where “the meat” is. (Dr. Fletcher did not respond because he had to leave the meeting early.)


Ms. McCall asked for comments on whether anyone is talking about the image of a market in which various types of discovery—for example, standard vocabularies—can be transferred from one organization to another in an open source manner, as a public good. Dr. Huff highlighted work, spearheaded by people at the VA, on developing a standard service interface (application programming interface, or API). This development has the potential to “truly commoditize and revolutionize” the way software modules are built so they are pluggable in a marketplace in which people adhere to this type of interface. This capability combined with standardized terminology could dramatically reduce the cost of the systems and make it possible to share innovation. Dr. Geerlofs agreed, and said Allscripts is starting to talk about the same notion. He believes the market will drive the push toward standardizing the API, and suggested that government exert a “gentle influence” and not try to mandate it.


In response to a question, Dr. Huff said IHC’s terminology base is similar to MedCin, and he affirmed that organizations can acquire entry-level terminologies rather than building them from scratch as IHC has done. Market forces are creating them, especially for the outpatient environment; there is less commonality and sharing for inpatient settings. Dr. Rucker commented on the complexity of inpatient order systems and cautioned the Workgroup against oversimplification.




At Ms. McCall’s request, individual participants identified the high points of the hearing for them, as follows[1]:


Dr. Fitzmaurice:

  • Templates can be useful in producing quality measures, but terminology and vocabulary are complicated and decision support isn’t easy.
  • It is not easy for data to move from one system to another, primarily because of vocabulary.
  • (See further comments from Dr. Fitzmaurice below.)


Dr. Scanlon:

  • The optimism of the morning panel is encouraging.
  • The afternoon panel left him more concerned; the presentations point to the need for flexibility and dynamism, which must be encouraged but may be difficult to achieve.


Ms. McCall:

  • Massive translation exercises are needed for both vocabulary and decision support.
  • The field needs to figure out how to go from an indicator to a measure.
  • Analysis is needed about how to transfer discovery and knowledge into metrics and findings.


Mr. Hungate:

  • Cultural factors and resistance to change are more serious issues than the content of the EHR.
  • Dr. Rucker told him he questions whether the emphasis on vocabulary is appropriate, given that this is not a priority for clinicians.


Ms. Jackson:

  • Everything is moving at warp speed; much has changed in the last 3-5 years. That means the next 2-3 years, the design stage, will make a pivotal difference.
  • She is concerned about “falling over the rocks while looking at the vision.”
  • There are major differences between the ambulatory and inpatient care environments.


Dr. Carr:

  • It takes a complete package to deliver quality. It is thus important to begin with a sense and roadmap of what clinicians believe quality to be.
  • The quality package includes not just electronic data capture but also integration of the patient medical record with reminders and decision support, all integrated into the flow of things. To this is added data manipulation and aggregation, trend evaluation, data display and drill-down — all necessary components.
  • The sense of really improving care is palpable in the VA system, which has the above components plus trending, giving the data to someone who is accountable, consequences, and expectations related to a vision of quality.
  • The VA system is also a compelling example of interoperability across an entire system.


Dr. Holmes:

  • The two panels represented two contrasting approaches to NCVHS priorities:

o        The morning panel was general, policy oriented and focused on population health;

o        The afternoon group was very detailed and focused on individual health outcomes.

  • This raises the question of whether, or how, to connect the two perspectives.


Ms. Greenberg:

  • There is evidence of positive views about the role the federal government is playing in moving things forward.
  • She feels less optimistic about flexibility and interoperability, without which EHRs will only improve care in individual silos that are unable to talk to each other.
  • The case for EHRs and quality improvement was made more clearly than ever through the IHC and VA examples. This evidence needs to get into the literature directed at consumers as well as the scientific literature. Does the Committee have a role to play in influencing consumers?
  • She agrees with Dr. Lansky that the real role of NCVHS is to address the capacity of NHIN and EHRs to measure and improve population health. No one else is looking at the population issues.


Ms. Kanaan:

  • The contextual issues raised by the first panel provide the frame of reference for the Workgroup and should be the starting point for whatever documents it produces.
  • The members of the first panel posed an important challenge to the Workgroup: to figure out how to implement a truly patient-centered approach.


Dr. Ortiz:

  • To move forward with the quality agenda, it is critical to be able to share data.
  • We need to do a better job of determining what constitutes good quality, beyond the traditional 20 to 30 measures in general use. Then we need to figure out how to collect the relevant data efficiently, using electronic tools.
  • Incentives need to be aligned to improve quality of care; until that happens, it will be difficult to get people to do the right thing.
  • We need to learn to share medical knowledge and decision support tools, to realize the benefits EHRs can deliver.


Dr. Janes:

  • Standards were a major theme, related to terminologies and vocabularies. The presentations were a sobering reminder that despite all the work in this area, much more is needed. Even with standard terminologies, the systems cannot talk to each other.
  • API looks promising, but it is still at a developmental stage. Is this something the Workgroup might want to think more about?
  • Although there has been progress in identifying and standardizing quality measures (e.g., HEDIS measures), most still do not describe the more complex aspects of medical care. We need to move beyond that.
  • Today’s terminologies and vocabularies are not well suited to moving ahead in quality assessment.



Finally, with just a few minutes left, the Workgroup discussed its immediate next steps. They agreed on the following:

1)     All the speakers’ slides will be distributed to the members promptly, as a summary of this meeting.

2)     The Workgroup will hold a conference call in early December.

3)     Members will circulate their three wishes for Workgroup priorities in advance of the conference call.


Ms. Greenberg commented that in the future, the Workgroup will need to consider these questions:

1)     Are site visits needed?

2)     Are there other people the Workgroup needs to hear from?

3)     Is the next step to pull together provisional recommendations and vet them widely, perhaps through regional hearings?


Dr. Fitzmaurice summarized the following themes and questions from the day’s discussions. He noted that taken together, they call for research, direction to existing contracts, and incentives.

1)     Focused research is needed on good quality measures.

2)     How well do the good measures scale up to population health measures?

3)     What do consumers want, and do higher quality measures give it to them?

4)     The NHIN, standards harmonization, and CCHIT certification contracts should emphasize terminologies and vocabularies for interoperability solutions. Pilots are needed, for example, to determine whether there should be mapping among the terminologies or a focus on choosing single vocabularies.

5)     What incentives can be developed and put in place?


Mr. Hungate then adjourned the meeting.



I hereby certify that, to the best of my knowledge, the foregoing summary of minutes is accurate and complete.



Chair                                                                              Date


[1] There was a great deal of agreement about many themes (notably, the unexpected optimism), but points that are reiterated by other participants are not repeated here.