The Urgent Need to Measure Patient Safety

Thomas R. Krause is the founder and former chairman of BST, a consulting firm that advises corporate and government clients on workplace safety.

Published October 19, 2015.

 

You've seen the astounding numbers: hundreds of thousands of Americans die each year due to medical treatment errors. Indeed, the median credible estimate is 350,000, more than U.S. combat deaths in all of World War II. If you measure the "value of life" the way economists and federal agencies do it – that is, by observing how much individuals voluntarily pay in daily life to reduce the risk of accidental death – those 350,000 lives represent a loss exceeding $3 trillion, or one-sixth of GDP. But when decades pass and little seems to change, even these figures lose their power to shock, and the public is inclined to focus its outrage on apparently more tractable problems.
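
The arithmetic behind that figure is simple enough to check. As a back-of-the-envelope sketch – assuming a "value of a statistical life" of roughly $9 million, in line with federal agency practice at the time, and U.S. GDP of roughly $18 trillion – the calculation runs as follows:

    # Back-of-the-envelope check of the $3 trillion figure.
    # Assumed inputs (not from the article): VSL ~ $9 million,
    # U.S. GDP ~ $18 trillion (2015).
    deaths = 350_000            # median credible estimate of annual deaths
    vsl = 9_000_000             # value of a statistical life, in dollars
    gdp = 18_000_000_000_000    # approximate U.S. GDP, in dollars

    loss = deaths * vsl
    print(f"Implied loss: ${loss / 1e12:.2f} trillion")   # $3.15 trillion
    print(f"Share of GDP: {loss / gdp:.1%}")              # 17.5%, roughly one-sixth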

In fact, there is little doubt that patient safety could be significantly improved. But the first crucial step to stopping the scourge of health care errors is seemingly the most mundane (and elusive): building a national measurement system that accurately tracks patient harm. With such a system in place, researchers would be far better equipped to test how well various safety initiatives would work in thousands of health care environments, to identify the weak links in hospital safety systems, and to pin down how insurers and regulators might best use their leverage to minimize patient risk.

As a reporter for Modern Healthcare summed up last year:

There is agreement that significant progress has been made on some fronts. But problems remain in many areas, due to a wide range of unproven interventions and inadequate performance metrics. Some clinical leaders doubt hospital safety is much better than it was 15 years ago when the Institute of Medicine issued a landmark report that helped launch the patient-safety movement.



The Challenge

My own expertise lies in advising enterprises in a range of industries, including chemicals, mining and transportation, on strategies for reducing accidents. The successes have far outnumbered the failures. But by virtue of the industry's size, rapidly moving technology and organizational complexity, health care's safety issues are in a league of their own.

The problem is not a lack of statistics, but how to compare and winnow them for clarity and compatibility. Currently, hospitals employ small armies to track myriad patient safety and care-quality statistics, reporting them to a wide range of entities. The process, though, is costly, inefficient and often ineffective.

Current estimates suggest the rate of adverse events in health care is higher than in any other setting. Alarming as that is, it is even more alarming that we have only a hazy sense of the problem's parameters, because we lack sufficiently robust data to quantify it.

The Institute of Medicine report called on Congress to monitor safety throughout the U.S. health care system. At the heart of the monitoring would be a nationwide mandatory reporting system in which state governments collected information on adverse medical events that resulted in death or serious harm. But the emphasis here is necessarily on “nationwide.” Without those numbers, the only way patient safety as a whole can currently be evaluated is through laborious, problematic studies that sample provider records to dig out the number of adverse events.

The failure to assay harm this way is unique to health care safety. Safety on our highways, in our homes and recreation areas, and in virtually all other industries and government agencies is quantified and reported. The data serve as an anchor for prevention research, intervention design and evaluation of improvement strategies.

Health care administrators may be aware of the results of specific projects that address selected types of harm and typically maintain massive dashboards of indicators of their own. But the Balkanization of the process – the decentralization of decisions about what should be measured, who should do the measuring and who should have access to the data – hampers the design of effective strategies for making patients safer.

Consider again the lack of consensus on that most basic concept, the number of fatal adverse events. The Institute of Medicine estimated that in 1997 between 44,000 and 98,000 patients died as the result of medical errors. In spite of initial skepticism, based on the belief that the IOM had overstated the problem, these data are widely cited today. Indeed, more recent studies by Christopher Landrigan and others in the New England Journal of Medicine and John James in the Journal of Patient Safety as well as by the U.S. Department of Health and Human Services’ Office of the Inspector General estimate that the toll is far larger than the IOM numbers. These later estimates range from 130,000 to 180,000 deaths in hospitals from preventable adverse events and up to 480,000 deaths in all health care settings.

Such wildly varying estimates – based on different definitions of such key terms as "error" and "harm" – have limited value in tracking progress (or lack thereof) or in enabling planning across localities and organizations. Note, moreover, that such planning has become a major feature of the rapidly evolving health care industry as it responds to enormous pressure to get more bang for the taxpayer's and the privately insured's buck.

What’s Happened So Far

In the 15 years since the IOM report focused attention on measurement, there has been some movement forward. The Medical Errors Reduction Act of 2000 financed projects to evaluate strategies for reporting errors. The Patient Safety and Quality Improvement Act of 2005 removed some disincentives for accurate reporting by protecting the identity of health care providers who report mistakes. Ten years later, however, problems still abound: not all states report, not all hospitals are required to participate and, as I've made clear, there is no consistent measure of adverse events, preventable or not, that are permanently disabling, life-threatening or fatal.

In 2006, the Institute of Medicine did recommend options for a national performance measurement system. And in 2012, the Centers for Medicare and Medicaid Services (CMS) began implementing a pay-for-performance scheme based on mandatory reporting of specific claims events. That has provided greater transparency, with metrics on hospital performance available on the Department of Health and Human Services' website. Indeed, the Hospital Compare website gives consumers an enormous amount of information about local hospitals – and presumably strengthens the hospitals' incentives to shape up.

Two other government agencies, the Centers for Disease Control and Prevention (CDC) and the Health and Human Services' Agency for Healthcare Research and Quality (AHRQ), are also involved in pressing for better data. But none of these efforts, it should be emphasized, has resulted in a national measurement system.

Health Care Organizations

Not all efforts have been initiated by public agencies. The Leapfrog Group, an organization representing a mix of companies providing health insurance to their employees, used 28 measures from the CDC, CMS and AHRQ, along with its own survey of hospitals, to develop a publicly available composite patient safety index. Since survey participation is voluntary, however, scores are not tracked for every state. Moreover, the group offers no nationwide measure, and there is no way of estimating total harm to patients.

Another significant contribution came from the National Coordinating Council for Medication Error Reporting and Prevention, a group of health care stakeholders as varied as its alphabet-soup acronym is long. It developed an index for categorizing medication errors according to severity. Although targeted at medication errors, the NCC MERP severity index has seen broad application in research and adverse event reporting.
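
For the uninitiated: the index assigns each event a letter from A through I, running from circumstances merely capable of causing error to an error that may have contributed to a patient's death. A condensed paraphrase, expressed as a simple lookup table (the wording here is abbreviated from NCC MERP's published definitions):

    # Condensed paraphrase of the NCC MERP severity index (categories A-I).
    # Wording abbreviated; consult NCC MERP's published definitions for exact text.
    NCC_MERP_INDEX = {
        "A": "Circumstances or events with the capacity to cause error",
        "B": "Error occurred but did not reach the patient",
        "C": "Error reached the patient but caused no harm",
        "D": "Error reached the patient; monitoring required to confirm no harm",
        "E": "Temporary harm; intervention required",
        "F": "Temporary harm; initial or prolonged hospitalization",
        "G": "Permanent patient harm",
        "H": "Intervention required to sustain life",
        "I": "Error may have contributed to or resulted in death",
    }

    HARM_CATEGORIES = set("EFGHI")  # the categories involving actual patient harm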

The Institute for Healthcare Improvement, a nonprofit funded by a variety of companies and foundations, developed the Global Trigger Tool, a detailed set of instructions for retrospectively identifying adverse events in patient records, making those records usable for research on health care-associated errors. This is an important innovation, but it leaves individual institutions with the optional task of periodic, time-consuming, resource-intensive research to dig out events.

What Health Care Can Learn From Occupational Safety

Plainly, health care insurers and providers have gotten the message that error containment must be a high priority. And, equally plainly, a host of initiatives are making a difference – but one limited by the lack of a comprehensive national database. It thus makes some sense to see what has worked on safety data in other fields.

The Occupational Safety and Health Act of 1970 was a game changer. Before that, there was no national standard for reporting occupational injuries. OSHA drives two levels of measurement with common metrics, setting record-keeping standards at the organizational level and monitoring compliance with the standards. Meanwhile, the Bureau of Labor Statistics collects, compiles and analyzes statistics for the nation.

Reporting criteria are relatively simple and provide enough information to gauge the frequency and severity of harm. OSHA audits the records for compliance, while individual organizations use them for internal measurement and benchmarking.

The BLS's Annual Survey of Occupational Injuries and Illnesses provides a national measure of workplace safety. Each year, approximately 200,000 organizations submit their OSHA logs, along with employment information (used as a denominator to calculate injury and illness rates) and demographic information. It's important to note that the BLS survey is confidential.
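
The mechanics of that denominator are worth spelling out. OSHA and the BLS normalize recordable cases per 200,000 hours worked – the equivalent of 100 full-time employees working a year – so a minimal sketch of the standard incidence-rate calculation looks like this:

    # OSHA/BLS incidence rate: recordable cases per 200,000 hours worked
    # (100 full-time employees x 40 hours/week x 50 weeks/year).
    def incidence_rate(recordable_cases: int, hours_worked: float) -> float:
        """Injuries and illnesses per 100 full-time-equivalent workers."""
        return recordable_cases * 200_000 / hours_worked

    # Example: 12 recordable cases over 480,000 hours worked yields a rate
    # of 5.0, directly comparable with BLS industry benchmarks.
    print(incidence_rate(12, 480_000))  # 5.0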

The BLS and OSHA measurement systems use common definitions and rate calculations. These enable comparisons across organizations, locations and industries. By the same token, OSHA's record-keeping standards for employers facilitate the BLS annual survey process. The impact of a reliable metric that's nationally consistent (supplemented, of course, with regulation that is enforced) has been significant.

From 1940 to 1970 (pre-OSHA), work-related fatal injury rates fell by half, reflecting a mix of regulation and employer-initiated safety efforts. One might have expected the rate of improvement to fall thereafter because the initial gains were presumably the easiest. But that has not been the case. From 1970 to 2000, after the creation of OSHA and its record-keeping requirements, fatal injuries fell by three-quarters. Admittedly, it’s hard to disprove the counterfactual that rates would have fallen as much without the rise of national record-keeping that gave regulators easier ways to compare industries, localities and individual plants. But most safety experts give a lot of credit to the reform.

What Must Be Done

Financing study after study after study in order to learn the frequency of adverse events has proved ineffective, because variation in methods, sampling and scope makes it impossible to compare numbers across time and geography. One could argue that some individual hospitals already measure what they need to drive improvement within their facilities. But patient safety is not an individual hospital issue. A federally mandated measure would allow more accurate comparisons, facilitating competition to improve institutional performance and providing the information needed to formulate more effective regulation of health care safety.

The building blocks for this national system already exist. The NCC MERP system for classifying harm from medication errors could be adapted to cover adverse events generally. The starting point could be just three measures, organized by degree of harm using the NCC MERP classification system (a computational sketch follows the list):

Fatality Rate. Number of fatalities related to adverse events, divided by hospitalized days.

Serious Injury Rate. Number of adverse events causing temporary or permanent harm and those requiring additional hospitalization, divided by hospitalized days.

Reportable Injury Rate. Number of adverse events causing any harm to a patient, divided by hospitalized days.
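
To make the proposal concrete, here is a minimal sketch of how the three measures might be computed. The normalization to 1,000 patient-days and the assignment of NCC MERP categories to the three tiers are illustrative assumptions, not settled definitions:

    # Illustrative sketch of the three proposed measures, per 1,000 patient-days.
    # Assumed tier assignments (for illustration only):
    #   fatality = I; serious = E-H; reportable = E-I (any harm).
    from collections import Counter

    def harm_rates(events: list[str], patient_days: float) -> dict[str, float]:
        """events: one NCC MERP category letter per adverse event."""
        counts = Counter(events)

        def per_1000(n: int) -> float:
            return 1000 * n / patient_days

        return {
            "fatality_rate": per_1000(counts["I"]),
            "serious_injury_rate": per_1000(sum(counts[c] for c in "EFGH")),
            "reportable_injury_rate": per_1000(sum(counts[c] for c in "EFGHI")),
        }

    # Example: five harm events over 50,000 patient-days
    print(harm_rates(["E", "E", "F", "G", "I"], patient_days=50_000))
    # {'fatality_rate': 0.02, 'serious_injury_rate': 0.08, 'reportable_injury_rate': 0.1}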

As a practical matter, data collection should shield data providers from fallout; without confidentiality, reporting is bound to be distorted. U.S. health care is rife with finger pointing, spurred on by malpractice lawyers and insurers eager to minimize liability. Data collection and analysis shouldn't serve to deepen the sins of the extremely inefficient legal liability system or allow that system to compromise the data's accuracy.

As important, reporting needs to be mandatory. Since the IOM initially called for mandatory reporting, there has been considerable debate over the relative efficacy of mandatory versus voluntary reporting systems. I think too much is at stake – and the incentives for obfuscating bad news too great – to depend on health care organizations to be pressured by public opinion or consumer opprobrium into reporting (and reporting accurately).

Seizing the Day

It should not be surprising that it is difficult to build a comprehensive error-reporting system for America – one that readily permits comparison across localities, methods of organization, incentive structures and the like. The sheer size, degree of interest-group conflict and decentralization of the health care establishment make virtually any change exceptionally difficult.

But, by the same token, the potential rewards have never been greater. Digitization makes the management and analysis of vast amounts of data far easier. And as the population ages, the stakes in delivering care safely as well as efficiently will inevitably grow.

As the record of countless experiments in improving patient safety shows, the problem is hardly being ignored. However, hard work alone does not ensure that care organizations are expending effort on the right things.

Moreover, the current culture in health care does not support comprehensive reporting. Unlike other (generally less dangerous) industries, health care has no obligation to report. So we can assume that incidents are grossly underreported – but just where and why, we don't know. Note, too, that as health care itself changes, a delivery model centered on individual physicians is an increasing impediment. In reality, our health care system is no longer built around single physicians but around groups of caregivers. And the gaps in patient error-reporting mirror the gaps in recordkeeping as patients negotiate the tortuous paths of modern care.

We have the technical capacity to create a comprehensive error-reporting system and to make good use of it. The question now is whether we have the will.
