In conversation, in print, and on this blog, skeptics often question the validity of anecdotal evidence when it comes to health care. "Evidence-based medicine" is the widely accepted standard for determining whether a health care treatment is effective. Unless a treatment has undergone the scrutiny of a "randomized controlled trial," personal testimony about one's own health experience is often discounted. Yet the following post indicates that this standard is under fire within the medical community.
Like many of us, I'm interested in learning about efforts to improve health care in America. One such effort in my home state of Washington began in 2006 with the creation of the Health Technology Assessment (HTA) committee. The primary purpose of the eleven-member group is to "ensure medical treatments and services paid for with state health care dollars are safe and proven to work."
Since the committee's inception, it has ruled on 21 procedures and denied coverage for about half of them. The decisions to date are expected to save the state approximately $32 million annually, but the savings have not come without controversy.
The HTA committee bases its decisions on "scientific evidence and a committee of practicing clinicians." That sounds reasonable, but recent high-profile articles and other commentaries argue that defining "scientific evidence" in assessing health care treatments leaves room for ambiguity.
Despite maintaining a unique, open process as part of its charter, the HTA had not attracted national attention until this March, when it was the topic of articles in both The New York Times and The Wall Street Journal.
The March 22 editorial in The Wall Street Journal noted, “The most compelling reason to be worried about comparative effectiveness research is simple. Randomized trials are designed to find average results over large groups of people, but doctors do not treat averages. They care for individuals, and what works for the typical patient may not work for you…”
(Randomized controlled trials compare how one group responds to a treatment against how an identical group fares without the treatment.)
A June 17 article in The Seattle Times further emphasizes the point: “the committee puts too much emphasis on randomized studies, when they may not be that good – or even exist – and don’t reflect what doctors see in their practices.”
More surprising is the recent work, profiled in The Atlantic, of Dr. John Ioannidis, professor of medicine and director of the Stanford Prevention Research Center at the Stanford University School of Medicine. His exhaustive research continues to conclude that trust in the scientific validity of medical studies is often misplaced. Randomized trials are considered the gold standard, yet across different fields of medicine, Ioannidis found that 25 percent of them were wrong, along with 80 percent of non-randomized studies.
In the article, he goes so far as to note that "as much as 90% of the published medical information that doctors rely on is flawed." Dr. Ioannidis is one of the world's foremost experts on the credibility of medical research.
Clifford Saron, a neuroscientist at the University of California at Davis, is quoted in this month’s Atlantic as saying:
“We have to be careful about allowing presumed objective scientific methods to trump all aspects of human experience.”
If the current standard for evaluating health care treatments is proving problematic, perhaps a new approach is needed.
Article first appeared as “Who Knew that Standing Firm on ‘Scientific Evidence’ Could Be So Controversial” on Blogcritics.org.