
Experts Say Rating Hospital Quality Means Asking the Right Questions

by Sheela Philomena on Jun 15 2011 3:31 PM

Researchers from Johns Hopkins University argue that more attention needs to be paid to the quality of the measurement tools used to grade hospitals.
The science of outcomes reporting is young and lags behind the desire to publicly report adverse medical outcomes, write Elliott R. Haut, M.D., an associate professor of surgery at the Johns Hopkins University School of Medicine, and Peter J. Pronovost, M.D., Ph.D., a Johns Hopkins professor of anesthesiology and critical care medicine, in the June 15 issue of the Journal of the American Medical Association.

"Everyone wants to know, 'What is the best hospital?' 'Where should I have my surgery?'" Haut says. "People want to compare hospitals, but if the science can't keep up, maybe we're doing more harm than good when we report certain kinds of data. It raises a different question: Are the numbers being reported meaningful?"

The researchers say an important source of error in some currently reported outcome measures is something called "surveillance bias," which essentially means that "the more you look, the more you find." Take the problem of deep venous thrombosis (DVT), a clot deep inside a part of the body that can block blood flow and cause swelling and pain. If the clot breaks off, it becomes a pulmonary embolism (PE) and can get stuck in the heart or lungs and kill the patient.

One key to stopping a DVT from becoming deadly is to prevent it, or to find it early and treat it. So the more tests a hospital runs for DVT, the higher its reported DVT rate will be. If a hospital has a high DVT rate, Haut says, is it a place a patient should avoid? Or is it a place that looks for DVT more aggressively before any symptoms appear and prevents it from progressing to a much more serious complication? Reporting a DVT rate, he says, therefore doesn't tell much about hospital quality, since it doesn't distinguish a hospital that is ignoring a potential complication from one that is successfully preventing it.
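The distortion is easy to see with a minimal simulation (an illustrative sketch in Python, not from the article; the true incidence and screening rates below are invented numbers). Two hypothetical hospitals have exactly the same underlying DVT incidence and differ only in how aggressively they screen, yet their reported rates diverge sharply.

```python
import random

random.seed(42)

TRUE_DVT_INCIDENCE = 0.10  # assumed true clot rate, identical in both hospitals
PATIENTS = 10_000

def observed_dvt_rate(screening_rate: float) -> float:
    """Fraction of patients with a *recorded* DVT: a clot is counted
    only if the patient develops one AND the hospital looks for it."""
    found = 0
    for _ in range(PATIENTS):
        has_clot = random.random() < TRUE_DVT_INCIDENCE
        screened = random.random() < screening_rate
        if has_clot and screened:
            found += 1
    return found / PATIENTS

rate_a = observed_dvt_rate(0.90)  # aggressive surveillance
rate_b = observed_dvt_rate(0.20)  # minimal surveillance

print(f"Hospital A (screens 90% of patients): reported DVT rate = {rate_a:.1%}")
print(f"Hospital B (screens 20% of patients): reported DVT rate = {rate_b:.1%}")
```

With identical underlying incidence, Hospital A reports a DVT rate several times higher than Hospital B simply because it looks harder: the "more you look, the more you find" effect.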

"Without a standard way for looking for these complications, the numbers people are looking at and making major decisions based on may be worthless," Haut says.

The Johns Hopkins Hospital actively looks for DVT in most trauma patients who are at high risk for these potentially life-threatening clots. These patients are given the blood thinner heparin, their legs are wrapped in automatic compression devices to keep blood moving, and they receive regular ultrasounds to look for clots before symptoms even appear.

Not long ago, Haut remembers, Maryland state regulators asked Johns Hopkins why it had the highest rate of DVT in Maryland. "The question was, essentially, why are you doing such a bad job?" he recalls. Hopkins officials went back and realized the high rates were likely because of surveillance bias. "If you look more you may find more, but you can also treat DVT early before it becomes a major problem and kills you," he says. The lesson: "It might be OK to have a higher rate."

Some hospitals don't routinely screen for DVT, so their DVT rates are lower. They might also have a higher rate of complications from DVTs that aren't caught early, Haut suggests.

The issue isn't simply one of giving misleading information to the public about hospital quality, Haut says. The Centers for Medicare & Medicaid Services has said it will no longer pay the expenses associated with treating patients who develop DVT or PE after certain orthopedic surgeries, calling such complications "never events."

Haut calls it a "perverse incentive." If hospitals don't look for a DVT and don't find one, they will still get paid, but if other hospitals aggressively look for DVTs and find them, they won't get paid.

"There is broad bipartisan and public support for measuring outcomes, yet these measurements must be made accurately, guided by principles of measurement from clinical research. To do otherwise would be reckless and unjust," Haut and Pronovost write. "Which outcomes to evaluate must be determined and then they must be measured accurately, rather than squandering resources on measuring many outcomes inaccurately."

Haut and Pronovost argue that several steps can help reduce the errors caused by surveillance bias. First, those developing and reviewing outcome measures should ensure that the methods for surveillance are made clear and standardized among hospitals, so that comparisons are apples to apples. Second, a cost-benefit analysis must take place to determine which specific measures should be mandated. Third, they suggest, perhaps some of the outcomes measured may not be telling the whole story about preventable harm. Instead, it might be better to look at the processes involved in reaching an outcome.

For example, instead of reporting how many of a hospital's trauma patients get a DVT, a better way to judge the quality of care at the hospital might be to ask if a patient got the proper DVT prophylaxis, the right medicines and/or therapies to prevent the adverse event. That, they say, would paint a clearer picture of the quality of a hospital.
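As a rough sketch of that process-measure idea (hypothetical data; the field names and figures are invented for illustration, not taken from the article), a hospital could be scored on the fraction of eligible high-risk patients who received prophylaxis. Unlike a raw DVT count, that number does not rise just because a hospital screens more.

```python
# Hypothetical patient records; the fields are invented for illustration.
patients = [
    {"high_risk": True,  "got_prophylaxis": True},
    {"high_risk": True,  "got_prophylaxis": False},
    {"high_risk": True,  "got_prophylaxis": True},
    {"high_risk": False, "got_prophylaxis": False},  # not eligible, excluded
]

eligible = [p for p in patients if p["high_risk"]]
compliant = [p for p in eligible if p["got_prophylaxis"]]

print(f"Prophylaxis compliance: {len(compliant)}/{len(eligible)} "
      f"= {len(compliant) / len(eligible):.0%}")
```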

"You have to make sure your measures are fair and that the benefits of reporting adverse outcomes outweigh the risks of unfairly harming hospitals, because these measures have unintended consequences," Haut says.

Source: EurekAlert

