Few aims are accorded greater nobility than transparency; we admire those who espouse it, and are instinctively contemptuous of those who would deny it. "Sunlight is the best disinfectant," we are continually reminded.
But as Lawrence Lessig wrote in an eloquent New Republic essay, "Against Transparency," in 2009, "we should also recognize that the collateral consequence of [a] good need not itself be good."
Indeed. Behold the unfolding consequences of transparency initiatives - the groundless demonization of GMO-containing foods, the reflexive critique of anyone who accepts money from drug companies (disclosure: I work at one). It makes you wonder how an ostensibly virtuous impulse could lead to such dubious results.
The answer, as I see it, lies in the unreasonable influence of anchoring effects - the power of specific, incomplete, and often irrelevant information to disproportionately shape our thinking.
The phenomenon of anchoring is perhaps best explained by noted flaneur Nassim Taleb, in a document accompanying his 2011 expert testimony to Congress:
Numerous experiments provide evidence that professionals are significantly influenced by numbers that they know to be irrelevant to their decision, like writing down the last 4 digits of one's social security number before making a numerical estimate of potential market moves. German judges rolling dice before sentencing showed an increase of 50% in the length of the sentence when the dice show a high number, without being conscious of it.
A key consequence is that some information isn't always better than none; due to anchoring, exposure to limited or unrepresentative data can have the unintended effect of leading to poor decision-making.
Consider the case of GMO labeling of food - a topic my Forbes colleague Emily Willingham has perhaps covered better than anyone. Advocates insist they are interested in fairness and openness, and point to the unassailable virtue of transparency. Yet selecting GMOs - of all the millions of potential attributes of food - for labeling represents very narrow, selective disclosure, a sliver of transparency. The choice is inevitably judgmental, and hardly neutral. Calling out a particular attribute suggests there's something about this feature that requires particular scrutiny - in the case of GMOs, a remarkably specious assumption.
I have similar worries about the ongoing transparency crusade involving doctors and drug companies. For starters, calling out financial conflicts rather than the thousands of other factors that might influence decision-making seems inherently misleading, suggesting that such conflicts are significantly more important than other potential conflicts that may simply be more difficult to gauge, measure, or document.
I'm especially concerned that the pharma transparency initiatives - ProPublica helpfully calls its "Dollars for Docs" - implicitly suggest physicians who receive pharma dollars are corrupt and on the take; this inference is often made explicit by journalists. One unfortunate consequence is the signaling problem that arises, leading some of the most distinguished clinicians and physician-scientists - the doctors who know the most, and who are appropriately in the greatest demand - to be labeled as pharma shills, rather than as high-value experts who guide companies towards the development of more impactful medical products.
Are financial incentives important? Of course they are. But physician decision-making, like most decision-making, is complex and nuanced, and by reducing it to a simple financial calculus merely because we can, we risk obscuring as much as - or perhaps more than - we reveal, creating and reinforcing a comfortable and tidy narrative, an explanatory model that may explain very little, and afford only false reassurance.