Quality Initiative and Perspective from Intermountain Health Care
Brent C. James, M.D., M.Stat.
Executive Director, Institute for Health Care Delivery Research
Intermountain Health Care
Salt Lake City, Utah, USA
Humphrey Building Room 705A, 200 Independence Ave SW, Washington, DC
Thursday, 2 June 2005 — 9:20a – 10:20a
National Committee on Vital and Health Statistics
Workgroup on Quality
1. Aim defines the system (W. Edwards Deming)

Comparative data for Accountability: focus on the Person (individual or institution)
– Judgment, e.g. legal sanctions; malpractice tort actions; professional or social shame; licensing, credentialing/privileging
– Payment (for quality)
– Selection / prioritization (motivation)
– Aim: accurate ranking

Data for Learning: focus on the Process
– Process operation and management
– Improvement: hypothesis generation and testing
– Aim: noise reduction (improved signal / noise ratio)
Reference: Berwick, D.M., James, B.C., and Coye, M. The connections between quality measurement and improvement. Medical Care 2003; 41(1):I30-39 (Jan).
General reference: Institute of Medicine Committee on Data Standards for Patient Safety. Patient Safety: Achieving a New Standard of Care. Aspden, Philip, Corrigan, Janet M., Wolcott, Julie, and Erickson, Shari M., editors. Washington, DC: National Academies Press (www.nap.edu), 2004; Chapter 8 (pp. 250-278).
Outcomes assessment

Differences in Patients
– Individual anatomy, physiology, biochemistry, and genetics
– Burden of disease (presence, expression, and severity of comorbid illnesses)
– Response to treatment
– Preferences and beliefs
– Ability to participate in own treatment (e.g., educational level; interest and engagement)
– Access to resources

Differences in Treatment (Performance)
– Availability of resources (tests and treatments)
– Health promotion / disease prevention
– Problem / opportunity identification (complete and accurate diagnosis)
– Selection of all appropriate interventions (referral & treatment indications; everything that works, but only what works)
– Execution (of tests and treatments)
– Patient relationships (attentiveness; information transfer; shared decision making; dignity & respect)

Differences in Results
– Medical outcomes: appropriateness; complications (process failures / defects); therapeutic goals; patient perceptions of outcomes (functional status)
– Service outcomes: clinician-patient relationship; access / convenience
– Cost outcomes
2. Outcomes assessment gone bad
Differences in Measurement: completeness, accuracy, timeliness.

The measurement chain: a data element reaches analysis and reporting only if every link holds (No / Yes at each step):
1. Science: are all necessary data elements known?
2. Measure selection: were all major known factors included in the measure set?
3. Patient assessment: were all measures clinically assessed for this patient?
4. Documentation: were all measures recorded in the patient record?
5. Abstraction: were the measures extracted from the patient record?
6. Analysis & reporting

At every link, ask: Complete? (including sequencing; thoroughness vs. convenience; specialization / aggregation issues) Accurate? (completely defined, with coding etc.; stringent case identification; audit systems) Prioritized? (some factors have much greater effect on outcomes than others) Timely?
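The chain's practical force is multiplicative: a measure survives into analysis only if every link holds. A minimal sketch, with hypothetical per-link capture rates (illustrative only, not IHC figures):

```python
# Probability that a data element survives the whole measurement chain.
# Per-link capture rates are hypothetical, for illustration only.
links = {
    "science (element known)":       0.95,
    "measure selection (included)":  0.90,
    "patient assessment (assessed)": 0.85,
    "documentation (recorded)":      0.80,
    "abstraction (extracted)":       0.90,
}

survival = 1.0
for step, p in links.items():
    survival *= p
    print(f"after {step:<32} {survival:.2f}")
# Five links at 80-95% each leave only ~52% of the signal intact:
# individually reasonable links compound into badly incomplete data.
```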
Most "generic" accountability systems cannot rank accurately! Typical positive predictive values fall in the range of 0.25 – 0.40 (but are still statistically significant).
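A toy simulation shows why ranking fails even when the underlying difference is real; every number here (provider counts, case volumes, complication rates) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_prov, n_cases = 1000, 100                 # hypothetical providers, cases each
p_good, p_bad, frac_bad = 0.05, 0.08, 0.10  # assumed complication rates

truly_bad = rng.random(n_prov) < frac_bad
true_p = np.where(truly_bad, p_bad, p_good)
observed = rng.binomial(n_cases, true_p) / n_cases  # noisy observed rates

# Flag the observed worst decile, as a generic ranking system might.
flagged = observed > np.quantile(observed, 0.90)

# PPV: what fraction of the flagged providers are truly poor performers?
print(f"flagged {flagged.sum()}, PPV = {truly_bad[flagged].mean():.2f}")
```

Most of the flagged "worst decile" is sampling noise; the PPV lands near the range quoted above, even though the true difference (5% vs. 8%) is real.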
3. What does "outlier" mean?

[Run chart: overall C-section delivery rate by month, Jan 1997 through Jul 1999, for hospital XXXXXX (4,779 cases) against the center line of other hospitals without a NICU (28,872 controls); y-axis 0 to 0.35; YTD rate 0.1922.]
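Charts of this kind are conventionally standard p-charts; a sketch of the underlying arithmetic, using invented monthly counts rather than the hospital's data:

```python
import math

# Hypothetical monthly C-section counts: (deliveries, cesareans).
months = [(310, 58), (295, 61), (330, 54), (315, 72)]

# Center line; on the slide this came from the comparison hospitals.
total_n = sum(n for n, _ in months)
total_c = sum(c for _, c in months)
p_bar = total_c / total_n

for i, (n, c) in enumerate(months, 1):
    se = math.sqrt(p_bar * (1 - p_bar) / n)     # binomial standard error
    ucl, lcl = p_bar + 3 * se, max(0.0, p_bar - 3 * se)
    rate = c / n
    flag = "OUTLIER" if not (lcl <= rate <= ucl) else ""
    print(f"month {i}: rate={rate:.3f}  limits=({lcl:.3f}, {ucl:.3f}) {flag}")
```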
"Outlier" means that, with careful analysis, you can probably find a true root cause.

At IHC, across thousands of comparisons that found hundreds of outliers, based on carefully designed clinical data systems (not just existing administrative data), more than half of all root causes turned out to be data system failures, not care delivery problems.

Conclusion: even a well-designed clinical outcomes system requires improvement feedback, to find and correct unrecognized problems within the data system itself.
4. Good outcomes systems tend to:
1. Focus on a single condition (clinical process).
2. Collect carefully selected clinical data (rather than relying, for convenience, only on existing administrative data).
3. Use intermediate as well as "final" outcomes, which can greatly increase sample size and shorten assessment timelines (see the power arithmetic sketched below).
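The leverage in item 3 is ordinary power arithmetic. A sketch using standard approximate sample-size formulas and invented effect sizes (a 4% → 3% mortality change vs. a 0.5-point A1c shift with SD 1.5):

```python
import math

# Two-sided alpha = 0.05, power = 0.80 (z values hardcoded).
Z_A, Z_B = 1.96, 0.84

def n_per_group_props(p1, p2):
    """Approximate n/group to detect p1 vs. p2 (two proportions)."""
    p = (p1 + p2) / 2
    return math.ceil(2 * (Z_A + Z_B) ** 2 * p * (1 - p) / (p1 - p2) ** 2)

def n_per_group_means(delta, sigma):
    """Approximate n/group to detect a mean shift delta with SD sigma."""
    return math.ceil(2 * (Z_A + Z_B) ** 2 * sigma ** 2 / delta ** 2)

# Final outcome: cut 5-year DM-associated mortality from 4% to 3%.
print("mortality endpoint: n/group =", n_per_group_props(0.04, 0.03))  # ~5,300
# Intermediate outcome: shift mean A1c by 0.5 points (SD 1.5).
print("A1c endpoint:       n/group =", n_per_group_means(0.5, 1.5))    # ~142
```

With these assumed numbers the intermediate endpoint needs roughly 40 times fewer patients per group.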
Outcomes chain: diabetes
diet / exercise / hypoglycemics → blood sugar levels → DM-associated mortality and morbidity
(the DCCT and UKPDS trials link blood sugar control to those outcomes)
Outcomes chains (Eddy's "causal chains") track the hierarchical elements of a process/outcomes structure: from a "final outcome", through a series of "intermediate outcomes" (also known as process steps), down to the level of actual decisions or behavior, the only place at which change is possible (the Japanese "Five Whys"; Nelson's concept of "drill down").

Issue: breaks in the outcome chain.
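As a data structure, an outcomes chain is just a linked hierarchy that supports drill-down from the final outcome toward decisions; a toy sketch (node names from the diabetes example above, the class layout assumed):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One step in an outcomes chain (Eddy's causal chain)."""
    name: str
    level: str                        # "final", "intermediate", or "decision"
    causes: list["Node"] = field(default_factory=list)

def drill_down(node: Node, depth: int = 0):
    """Walk from a final outcome toward decisions -- the 'Five Whys'."""
    print("  " * depth + f"{node.name} [{node.level}]")
    for upstream in node.causes:
        drill_down(upstream, depth + 1)

# Fragment of the diabetes chain.
diet = Node("diet / exercise / hypoglycemics", "decision")
a1c = Node("blood sugar control (A1c)", "intermediate", [diet])
mort = Node("DM-associated mortality and morbidity", "final", [a1c])
drill_down(mort)
```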
[Diagram: "Diabetes Outcome Chain" (Larry V. Staker, M.D.), laid out over a 0- to 30-year timeline. Level 1: diagnosis and stratification (Type I, Type II, gestational, GIT). Level 2: treatment (diet, exercise, oral meds, insulin; A1c, FBS, BP, weight, MD visits). Level 3: education (DM education, dietician, weight reduction, tobacco/ETOH counseling, home glucose monitoring, psych services, birth control, service quality). Level 4: prevention (annual retinal exam; foot and sensory exam; microalbuminuria → Rx ACE; high LDL / triglycerides → Rx statin or gemfibrozil; BP > 135/85 → Rx ACE; family planning). Levels 5–8: complications, branching through neuropathy (foot ulcer, cellulitis, osteomyelitis, amputation), retinopathy (proliferative retinopathy, laser Rx, blindness), low sugar (hypoglycemia, adjustment reaction, seizure, ER visit, coma), BP- and lipid-related disease (LVH on EKG, CHF, nephropathy, uremia, kidney failure, transplant or dialysis; CAD, ASVD, CABG, PTCA, MI, CVA; high triglycerides, abdominal pain, pancreatitis, DKA), depression (suicide), and pregnancy (high-risk pregnancy, complicated delivery, birth-related morbidity), with admissions and readmissions along the way, converging on mortality.]
The key to an outcomes chain is the reliability of the links. Strong links allow appropriate substitution of intermediate for end outcomes, often massively increasing data rates while shortening time lags.
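The data-rate claim is simple arithmetic; a sketch with assumed event and measurement rates:

```python
# Hypothetical clinic: 2,000 diabetic patients under management.
patients = 2000
obs_needed = 200                       # observations for a stable signal

# Final outcome: major DM complications at ~1.5 events/100 patients/year.
final_per_year = patients * 0.015      # 30 events/year
# Intermediate outcome: A1c measured ~3 times/patient/year.
inter_per_year = patients * 3          # 6,000 measurements/year

print(f"final outcome:        {obs_needed / final_per_year:.1f} years")  # ~6.7
print(f"intermediate outcome: {obs_needed / inter_per_year:.3f} years")  # days
```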
5. Cycle of Fear (William W. Scherkenbach)

Fear → kill the messenger (denial; shift the blame) → filter the data (game the system; looking good is often far easier than being good) → micromanage (tampering) → more fear.
Looking good ("gaming the system"):
– denominator deflation (incomplete case finding)
– numerator inflation (disconnected intermediates)
– hence the essential need for independent external audit (à la NCQA HEDIS measures)
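Both gaming moves are visible in the arithmetic itself, which is why audit matters; a toy illustration with invented counts:

```python
# A "quality" rate is numerator / denominator; both can be gamed.
true_eligible = 400        # patients who belong in the denominator
true_successes = 280       # patients who truly met the measure

print(f"honest rate:           {true_successes / true_eligible:.2f}")   # 0.70

# Denominator deflation: incomplete case finding drops 80 hard cases
# (only 20 of which met the measure).
deflated = (true_successes - 20) / (true_eligible - 80)
print(f"denominator deflation: {deflated:.2f}")                         # 0.81

# Numerator inflation: counting disconnected intermediates (e.g., a test
# ordered but never acted on) adds 40 bogus "successes".
inflated = (true_successes + 40) / true_eligible
print(f"numerator inflation:   {inflated:.2f}")                         # 0.80
```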
Redux: Aim defines the system

Comparative data for Accountability: focus on the Person (individual or institution); judgment, e.g. legal sanctions, malpractice tort actions, professional or social shame, licensing, credentialing/privileging; payment (for quality); selection / prioritization (motivation).
Data for Learning: focus on the Process; process operation and management; improvement (hypothesis generation and testing).

Side by side (accountability vs. learning):
– Aim: accurate ranking vs. noise reduction (improved signal / noise ratio).
– Data system: demands very accurate data vs. tolerates "dirty" data.
– Clinical data burden: high, usually an unfunded mandate (competes for quality resources) vs. low, integrated into clinical workflow (essential for care delivery).
– Often relies on existing (administrative claims) data, as a matter of expediency, vs. demands process-specific clinical data.
– Cannot generate process management measures vs. can generate ("roll up") accountability measures.
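The asymmetry in the last row can be shown directly: patient-level process data rolls up into accountability measures, while an aggregate claims rate cannot be decomposed back into process steps. A toy roll-up, with invented records:

```python
from collections import defaultdict

# Patient-level learning data: (hospital, process step met?).
records = [
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
]

# Roll process-level detail up into a hospital accountability measure.
num, den = defaultdict(int), defaultdict(int)
for hospital, met in records:
    den[hospital] += 1
    num[hospital] += met

for h in sorted(den):
    print(f"hospital {h}: {num[h]}/{den[h]} = {num[h] / den[h]:.2f}")
# The reverse is impossible: a hospital-level claims rate carries no
# information about which process step failed for which patient.
```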
6. Building an outcomes system
1. Pick a high-priority process.
2. Build a conceptual model (e.g., conceptual flow chart, cause & effect diagram, outcomes chain)
   – usually unconscious, but you can't practice, measure, or analyze without one
   – taps fundamental knowledge; can improve the model as well as the practice
   – generates consensus
3. Generate a list of desired reports
   – use the conceptual model plus an outcomes heuristic
   – format: annotated run charts / SPC charts
   – test with target end users
4. Generate a list of data elements
   – use the list of desired reports; think numerators and denominators
   – format: coding manual → self-coding data sheets
   – test (crosswalk) the final self-coding data sheets against the report list
   – test manually, at the front lines
5. Negotiate what you want against what you have
   – identify data sources for each element: existing/new, automated/manual
   – weigh the value of the final report against the cost of getting the necessary data
6. Plan data flow; program analytic routines (EDW datamart design).
7. Test the final system.

Reference: James, B.C. Information system concepts for quality measurement. Medical Care 2003; 41(1):I71-78 (Jan).
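Steps 3 and 4 reduce to numerators and denominators over explicitly defined data elements. A minimal sketch of the "coding manual → monthly run-chart report" path (the element definitions and records below are hypothetical):

```python
from collections import defaultdict

# Coding manual: each data element gets an explicit, testable definition.
CODING_MANUAL = {
    "eligible": "diabetic, age 18-75, two or more visits this year",
    "a1c_done": "at least one A1c result documented in the month",
}

# Self-coded patient-month records: (month, eligible, a1c_done).
records = [
    ("2005-01", True, True), ("2005-01", True, False),
    ("2005-02", True, True), ("2005-02", False, False),
]

num, den = defaultdict(int), defaultdict(int)
for month, eligible, a1c_done in records:
    if eligible:                      # denominator per the coding manual
        den[month] += 1
        num[month] += a1c_done        # numerator per the coding manual

# Report: run-chart points (month, rate), ready for an annotated SPC chart.
for month in sorted(den):
    print(month, f"{num[month] / den[month]:.2f}")
```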