Page 42 - Hall et al (2015) Principles of Critical Care-McGraw-Hill
CHAPTER 2: Measuring Quality
that antibiotics should be administered in less than 4 hours for patients with CAP, which was endorsed by the Infectious Diseases Society of America (IDSA),27 and later by the National Quality Forum, the Joint Commission, and the Centers for Medicare & Medicaid Services. This measure has since been publicly reported for all US hospitals, which drove some hospitals to adopt policies mandating antibiotic administration even before chest radiographs were obtained.28 The imposition was followed by several studies challenging the quality indicator: one study observed that 22% of patients with CAP had uncertain presentations (often lacking infiltrates on chest radiography), where delayed antibiotics would be appropriate29; other studies demonstrated that the 4-hour policy led to increased misdiagnosis of CAP, with concurrent increased antibiotic use for patients who did not have CAP30,31; more recently, prospective cohorts have failed to demonstrate any association between early antibiotics and treatment failure for CAP.32 These unintended consequences led the IDSA to revise their guidelines and exclude a fixed time frame for antibiotic use, recommending instead that antibiotics be administered as soon as a definitive diagnosis of CAP is made.33

Risk-adjusted mortality is a common tool used to measure and benchmark the quality of intensive care. This measurement can be thought of as a “test” to diagnose whether an ICU has high quality or not. We can apply the same criteria of validity, reliability, chance, confounding, and bias to see whether risk-adjusted mortality can be used to identify quality. Unfortunately, using simulations, Hofer demonstrated that both sensitivities and positive predictive values are inadequate. Depending on the case mix, sensitivities would range from 8% to 10% (ie, approximately 90% of low performers would not be detected) and positive predictive values would range from 16% to 24% (meaning that 76% to 84% of units classified as low performers would actually be average or high performers).34 Risk-adjusted mortality and its more commonly reported version, the standardized mortality ratio, certainly have uses; however, the limitations of these measures are well documented.35

It is still unclear whether there is value in public reporting of quality measures, either in driving the market toward high-quality centers or in motivating quality improvement. It is clear, however, that payers, governments, and consumers are likely to demand these reports in the future. The challenge then becomes how to apply a rigorous methodology to data collection, implementation of changes, and analysis of effectiveness at both the local and system levels.

MODELS OF QUALITY

While there are many newer formulations, the classic model proposed by Donabedian36 separated quality into three domains: structure, process, and outcome of health care, the rationale being that adequate structure and process should lead to adequate outcomes37; however, this has not always been the case, and in fact process and outcomes frequently do not move in the same direction.38

Structure measures the attributes of the settings in which care occurs. This includes facilities, equipment, human resources, and organizational structure. Process measures what is actually done in providing care, including treatments, diagnostic tests, and all interactions with the patient. Outcome measures attempt to describe the effects of care on the health status of patients and populations, such as mortality and health-related quality of life. Broader definitions of outcome include improvements in the patient’s knowledge, behavior, and satisfaction with care.

■ SOURCES OF VARIABILITY IN QUALITY MEASUREMENT

If we combine the above domains of structure, process, and outcomes with the methodological concepts described in the previous section, we can summarize a model of quality of care that is influenced by the variability of its different components (adapted from Lilford39):

Variance (Outcomes) = Variance (Definitions/Quality of Data)
                      + Variance (Case Mix) + Variance (Chance)
                      + Variance (Secular Trends)
                      + Variance (Quality of Structure and Process)    (2-1)

From this equation, the rationale for using risk-adjusted outcome rates is clear. By controlling the variation due to case mix and expressing the effects of chance, these models attempt to expose the residual unexplained variation, which is attributable to quality of care. This leads naturally to the ranking of hospitals according to risk-adjusted mortality rates, with an implied correlation with quality of care. From the above model, it is clear that these assumptions are overly simplistic. Differences in the definitions and quality of data can lead to differential bias and upcoding of severity of illness. Despite protocolized data collection, measures of case mix are imperfect, even in critical care where they are highly evolved. Using data from Project IMPACT, a multicenter cohort of ICUs that carefully collects data on quality of care, Glance customized the SAPS II and MPM II scoring systems and used them to rank 54 hospitals by their risk-adjusted mortality. The two different scores led to differences in the classification of 17 ICUs, including some that would be classified as low performers under one model but as high performers under the other.40 The possibility of outlier misclassification suggests that risk-adjustment models are poorly suited to claiming differences in quality of care. However, when using process-based measurements, the sources of variability decrease considerably:

Variance (Process of Care) = Variance (Definitions/Quality of Data Acquisition)
                             + Variance (Chance)
                             + Variance (Secular Trends)    (2-2)

The primary advantage of process measures of quality is that they are relatively insensitive to case-mix adjustment. This rests on the assumption that rigorous data definitions can ensure that the population identified for process-measure evaluation should indeed have the process applied. Under this assumption, variations in process of care should be influenced only by chance and secular trends.

If we could control for all sources of variation in Equation (2-1), we would expect to observe a direct relationship between process of care and outcomes. That is, the better the process of care at any given unit, the better the outcomes should be. While this seems intuitive, sound scientific evidence is lacking. Earlier work tried to assess quality of care by a process called implicit review.41 When using this process, experts performed a qualitative review of medical records and assigned a quality scale to the care received by each patient. Using this methodology, Rubenstein et al demonstrated a 40% to 200% increase in the relative risk of death for selected diagnoses associated with the measured quality of care.41 However, this methodology is obviously flawed. When experts assess the charts, they are not blinded to the outcomes, and knowing whether a patient survived may influence their opinion of the quality of care. The problem with this type of quality review was elegantly demonstrated by Caplan et al, who queried 112 anesthesiologists regarding the appropriateness of care in 21 cases. In each case the outcome had been manipulated to demonstrate either permanent or temporary disability. The study showed that the appropriateness of care was assessed differently depending on the outcome: in cases with permanent disability, the reviewers reduced their rating of appropriate care by 30% compared with the exact same clinical scenario with temporary disability.42 This study raises significant doubts about the validity of implicit expert review of quality when the reviewer knows the outcome of care.

More recent work addresses quality of care with objective measurements of processes of care, and the links between process and outcome are less clear. For example, in a study of hospitals’ self-reports, structural and process measures of quality endorsed by the Leapfrog Group38 were not associated with inpatient mortality. In a large study of 5791 patients with heart failure, an association between mortality and compliance with five process measurements endorsed by the American Heart Association could not be demonstrated after risk adjustment. The process measure that came closest to demonstrating an association with mortality was also the one for which there is the most scientific evidence: the use of an ACE inhibitor or ARB in patients with left ventricular dysfunction.43
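As a concrete illustration of the arithmetic behind the standardized mortality ratio discussed above, the following sketch (in Python, with entirely invented numbers) computes the SMR as observed deaths divided by the deaths expected under a severity model such as SAPS II or MPM II. The uniform 25% predicted risk is an assumption chosen for simplicity, not a property of either scoring system.

```python
def standardized_mortality_ratio(observed_deaths, predicted_risks):
    """SMR = observed deaths / expected deaths, where the expected count
    is the sum of each patient's model-predicted probability of death."""
    expected_deaths = sum(predicted_risks)
    return observed_deaths / expected_deaths

# 200 hypothetical admissions, each with a model-predicted death risk of 0.25,
# so the severity model "expects" 50 deaths.
risks = [0.25] * 200

print(standardized_mortality_ratio(40, risks))  # 0.8 -> fewer deaths than expected
print(standardized_mortality_ratio(60, risks))  # 1.2 -> more deaths than expected
```

An SMR below 1 is conventionally read as better-than-expected mortality; as the chapter argues, that reading is only as trustworthy as the case-mix model and the data behind it.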
Section01.indd 11 1/22/2015 9:36:43 AM
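The Variance (Chance) term in the variance model above is easy to underestimate. A minimal simulation, assuming 50 ICUs of identical true quality (every unit gives each of its 150 patients the same 20% true mortality risk; all numbers are invented), shows how far apart the "best" and "worst" observed mortality rates drift on chance alone:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def observed_mortality(true_risk=0.20, n_patients=150):
    """Observed mortality rate for one ICU in which every patient
    carries the same true risk of death."""
    deaths = sum(random.random() < true_risk for _ in range(n_patients))
    return deaths / n_patients

# 50 ICUs of identical true quality, ranked by observed mortality.
rates = sorted(observed_mortality() for _ in range(50))
print(f"lowest observed mortality:  {rates[0]:.1%}")
print(f"highest observed mortality: {rates[-1]:.1%}")
# A league table built from these rates would still crown winners and
# losers, yet every difference here is Variance (Chance).
```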

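Hofer's positive-predictive-value figures can also be reproduced in spirit with Bayes' rule. The prevalence and specificity below are assumptions chosen only to land in the reported 16% to 24% range; they are not values taken from the study itself.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV via Bayes' rule: of the units flagged as low performers,
    what fraction truly are low performers?"""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Suppose 10% of ICUs are truly low-performing and the flagging rule has a
# Hofer-like sensitivity of 10% with 96% specificity (both assumed):
ppv = positive_predictive_value(sensitivity=0.10, specificity=0.96, prevalence=0.10)
print(round(ppv, 2))  # 0.22 -> roughly 4 of every 5 flagged units are false alarms
```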
