Every week, it seems, I come across a new story about data analysts proposing to mine data from either medical records or social media to identify potential new harmful drug side effects; often, it is asserted that the methodology could have been used to successfully identify, at an early stage, the concerning effects of Vioxx.
I worry about us heading down this road - but perhaps not for the reasons you might think.
Many cynically assume that drug developers (like me) would resist such a surveillance system because it could unexpectedly result in the withdrawal of a marketed drug, costing a company millions in lost sales.
In fact, I suspect most drug developers would thrill, positively thrill, to the prospect of a robust system of post-marketing surveillance that would enable rare but important side effects to be captured accurately and responded to quickly. No one wants to develop harmful products.
Many hurdles drug developers now face reflect the limited faith some regulators place in such post-marketing surveillance; initiatives such as the Sentinel program reflect the commitment of regulators to get this done right.
Bottom line: a robust surveillance system could make drug development faster, cheaper, and safer - a win for all stakeholders involved.
A robust surveillance system might even have an additional upside - the opportunity to detect the unexpected positive effects of drugs; such observations have played a critical role in the discovery of many important classes of drugs (especially for neuropsychiatric conditions), and it would be powerful to search for such opportunities more systematically, as discussed here.
Nevertheless, I'm concerned by some of the surveillance analytics about which I've been reading; I worry not about the sensitivity of these approaches - their ability to detect a true signal - but rather the specificity - their ability to accurately recognize the lack of a true signal.
I suspect many emerging approaches are so intensively focused on improving (and reporting) their sensitivity ("we could have found Vioxx!") that their developers may be neglecting to think through the consequences of relatively poor specificity - the detection of "signals" that aren't there. Much as the constant beeping of warnings in the hospital (especially in the ICU) produces alarm fatigue, you may start discounting all warnings - including, unfortunately, the real ones.
By itself, the tradeoff between sensitivity and specificity could probably be managed through appropriate selection of cutoff criteria and optimization of the analytics. The real problem comes when you factor in the effects of social media (an emerging issue in medicine).
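To see why specificity matters so much here, consider a back-of-the-envelope sketch (all numbers below are illustrative assumptions, not real surveillance figures): if a system screens many drug-event pairs and true safety signals are rare, even a seemingly high specificity leaves most alarms spurious.

```python
# Hypothetical illustration of the base-rate problem in surveillance:
# screen 10,000 drug-event pairs, of which only 1% harbor a true signal.
n_pairs = 10_000
true_rate = 0.01       # assumed prevalence of real safety signals
sensitivity = 0.90     # P(alarm | real signal)    - assumption
specificity = 0.95     # P(no alarm | no signal)   - assumption

expected_true_alarms = n_pairs * true_rate * sensitivity               # 90
expected_false_alarms = n_pairs * (1 - true_rate) * (1 - specificity)  # 495
ppv = expected_true_alarms / (expected_true_alarms + expected_false_alarms)

print(f"real alarms: {expected_true_alarms:.0f}, "
      f"spurious alarms: {expected_false_alarms:.0f}, "
      f"share of alarms that are real: {ppv:.0%}")  # -> about 15%
```

Under these assumed numbers, roughly 85% of alarms are false - exactly the recipe for the alarm fatigue described above, which is why tuning cutoffs for specificity (not just sensitivity) matters.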
As Reddit recently and painfully demonstrated (nicely reported by The Atlantic's Alexis Madrigal), false information can spread on social media like wildfire, reinforcing Iago's awful, prescient observation, "Trifles, light as air/Are to the jealous confirmations strong/As proof of holy writ."
The continued popularity of the anti-vaccine movement (despite the discrediting of Andrew Wakefield - see this recent informative discussion by Phil Plait in Slate), and the remarkable sales of books such as "Natural Cures ‘They' Don't Want You To Know About" (see this NYT takedown of author Kevin Trudeau) emphasize the power of suspicion and rumor to impact perceptions of health and disease.
Next, think about what happens when you layer in the "nocebo effect" - discussed with characteristic grace by the New Yorker's wonderful new science blogger Gareth Cook here. As Cook explains,
"With placebos (‘I will please' in Latin), the mere expectation that treatment will help brings a diminution of symptoms, even if the patient is given a sugar pill. With nocebos (‘I will harm'), dark expectations breed dark realities. In clinical drug trials, people often report the side effects they were warned about, even if they are taking a placebo."
As Cook adds, "The nocebo effect is not confined to clinical trials."
So here's the dilemma - on the one hand, powerful new big data approaches enable smart researchers to identify important faint signals, though they may often mistake noise for signal. On the other hand, powerful social media technologies provide new opportunities for the profound amplification of spurious signals, allowing noise to be mischaracterized as real harm.
As we progressively gain our footing here, we'll learn how to sort all of this out. Let's hope that important new medicines are not erroneously discarded along the way.