On the otherwise quiet Friday afternoon of April 11, Bloomberg News reporter Michael Riley released an explosive story, asserting that the National Security Agency discovered a flaw in a widely used Internet security protocol two years ago but sat on the information in order to exploit it for intelligence gathering purposes. The article concluded “Putting the Heartbleed bug in its arsenal, the NSA was able to obtain passwords and other basic data that are the building blocks of the sophisticated hacking operations at the core of its mission, but at a cost. Millions of ordinary users were left vulnerable to attack from other nations’ intelligence arms and criminal hackers.” Riley based his account on the word of “two people familiar with the matter.”
Given the gravity of the charge, the initial, halting reaction of the Obama White House and the NSA was puzzling. Clearly caught off guard, the administration first responded with the standard “no comment” on matters related to the NSA. Within hours, however, both the NSA and the White House issued precedent-setting statements (the first time the security apparatus has ever commented on a specific charge), flatly denying that the NSA had known of the flaw since 2012. National Security Council spokeswoman, Caitlin Hayden, stated: “Reports that the NSA or any other part of the government were aware of the so-called Heartbleed vulnerability before April 2014 are wrong.” And the Director of National Intelligence, James Clapper, stated: “NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report.” (This statement reflects the fact that the flaw was independently uncovered by Google and a security firm, Codenomicon, who had begun warning other companies in the days before the Bloomberg story).
What have we learned after two weeks of extensive commentary and analysis from technical and political experts, as well as a predictably passionate and (sometimes) thoughtful response in the blogosphere? First, regarding the dangers to the security of the Internet: the threat is real, but thus far there has been no large-scale damage – so far as is known – to either private commercial activity or government traffic. The Heartbleed bug is actually a flaw in OpenSSL, an encryption tool utilized by as many as two-thirds of all active websites (though many consumer sites are not vulnerable because they employ specialized encryption software). It originated in an accidental coding mistake introduced by a volunteer German programmer two years ago that was not caught by reviewers. Despite conjecture in the blogosphere (and a history of just such activity), there is no evidence that the NSA had anything to do with the introduction of Heartbleed.
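The mistake itself was a missing bounds check in OpenSSL's TLS "heartbeat" extension: a client could claim its echo request was larger than it actually was, and the server would reply with that many bytes, spilling whatever happened to sit in adjacent memory. A minimal Python sketch of the logic error – not OpenSSL's actual C code, and with invented memory contents purely for illustration:

```python
# Conceptual sketch of the Heartbleed bounds-check error.
# MEMORY stands in for the server's heap: the 7-byte heartbeat payload
# followed by unrelated sensitive data (contents invented here).
MEMORY = b"payload" + b"SECRET_KEYS_AND_PASSWORDS_NEARBY"

def heartbeat_vulnerable(claimed_length: int, payload: bytes) -> bytes:
    # Flawed: trusts the length the client *claims* it sent, so a
    # large value reads past the payload into adjacent memory.
    return MEMORY[:claimed_length]

def heartbeat_patched(claimed_length: int, payload: bytes) -> bytes:
    # Fixed: the reply is bounded by the bytes actually received;
    # malformed requests are silently discarded.
    if claimed_length > len(payload):
        return b""
    return payload[:claimed_length]

# An attacker sends a 7-byte payload but claims it is 40 bytes long.
leak = heartbeat_vulnerable(40, b"payload")
assert b"SECRET" in leak                          # adjacent memory leaks
assert heartbeat_patched(40, b"payload") == b""   # the patch rejects it
```

Repeating such over-sized requests let an attacker sweep through server memory 64 KB at a time, which is why passwords and private keys were at risk without leaving any trace in server logs.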
Public acknowledgement of the bug has produced a huge scramble to produce protective patches. As noted, Google had independently discovered the flaw, and in the ensuing days and weeks large Internet and technology firms such as Cisco, IBM, Intel, Juniper Networks, Facebook, Yahoo, and Amazon, as well as banks and financial institutions, have moved to introduce fixes that will block potential hackers. The U.S. Department of Homeland Security warned that hackers are attempting to exploit the bug in targeted networks, and the Canada Revenue Agency reported that Heartbleed was used to steal data on some 900 Canadian citizens. Though large companies have moved quickly, the concern is that many small users will not act to protect themselves – and indeed, once the bug is exploited, the intrusion is virtually impossible to detect. In addition, recent tests by security firms have demonstrated that the Heartbleed bug can be utilized to capture private encryption keys that open unfettered access to data flowing between website servers and users’ computers – though again, evidence of actual exploitation has not surfaced.
This two-part series will address – but, alas, not definitively answer – two major questions: did the NSA actually know of Heartbleed before its public disclosure, and what policy implications flow from the evolving tale of Heartbleed? Part two will address the second question.
Though it has not been widely reported, Bloomberg, despite strong denials by the Obama administration, is sticking by its original story and conclusions (more on this below). Not unexpectedly, opinion from outside observers, in print and on the web, is deeply divided. For many, the NSA’s history of obfuscation and alleged deception is dispositive – Director of National Intelligence James Clapper’s “lying to Congress” in his Senate testimony that the agency did not spy on American citizens is repeatedly cited by skeptics. Along this line, though with less paranoia, other commentators, including the Washington Post technology blog, point to the pressure on the NSA to deny outright:
[If the Bloomberg report is true,] it is difficult to imagine any justification that will even begin to soothe the shock and outrage among people and businesses, both American and non-American who take computer security seriously. If it turns out that the vulnerability has been exploited either by criminals or (more likely) by non-U.S. intelligence agencies, the outrage will be even greater.
Adding to the suspicion is the fact that documents revealed by Edward Snowden showed that as early as 2010 an NSA program, BULLRUN, was targeting the SSL protocol. Many security experts find it hard to believe that, given the depth of such searches, the agency could have missed the error introduced in 2012.
Still, there are security experts – some generally critical of the NSA – who point to the emphatic, no-wiggle-room White House and NSA denials, and hold that the administration would not risk lying out of fear of public backlash if it were caught. Others, including a respected CSIS cybersecurity expert, James Lewis, firmly believe that the NSA would not have held back knowledge of Heartbleed. Lewis told the New York Times that such an NSA response would have been “weird,” knowing the risk to the Internet (Lewis is also almost alone in holding that the entire episode has been overdramatized by the press and some in the security community: “a long line of ridiculous stories about cybersecurity.”).
All of this leads us back to the original Bloomberg story and Bloomberg’s subsequent defense of its essential accuracy. In response to the administration’s emphatic denials, Bloomberg went back to its sources and also dug deeper into alternative explanations of what transpired between Heartbleed and the NSA. In a little-noticed follow-up article, Michael Riley reaffirmed his original conclusions but also noted that the agency had “more than one way to circumvent the security of SSL and OpenSSL.” Sources pointed out that a potential workaround could involve not exploiting the SSL software directly but breaking into a different system on the targeted computer on which the software depends. And an outside cybersecurity expert, Jason Syverson, conjectured: “Maybe it’s not Heartbleed, maybe it’s what they call alpha green, and alpha green is something that sends a packet to OpenSSL and creates an information leak.” The NSA could have considered this a hacking technique rather than an SSL software vulnerability that it had a responsibility to reveal. In this scenario, the White House and NSA denials could be technically correct while still allowing the NSA to reap the intelligence benefits of a workaround that did not exploit Heartbleed in the first instance.
Needless to say, the NSA had no comment on the follow-up Bloomberg story, so we are left with messy conjecture. But unless Heartbleed results in large-scale commercial hacking and damage to the Internet in the future (or Edward Snowden has another surprise up his sleeve), the White House and the agency may have dodged a bullet. This does not mean, however, that the difficult policy challenges created by the NSA’s capability and policy of breaking into Internet security protocols are settled or going away. These challenges will be taken up in Part 2 of this posting.
Though the potentially large disruption of Internet traffic has not as yet materialized, the revelation of the Heartbleed flaw has “prompted a full roar in the world of internet security,” in the words of Washington Post media blogger Erik Wemple. According to Bloomberg, the original Michael Riley piece, alleging that the NSA had jeopardized the Internet security of thousands of sites, pulled in some 378,000 hits on the Internet, and nearly 6,000 on Bloomberg’s much-utilized data terminals.
Whatever the potential technical perils for Internet security, the Heartbleed episode has already produced significant policy repercussions. Importantly, it has forced the Obama administration to reveal details of its internal cybersecurity decision making hitherto kept out of sight. It has highlighted – though certainly not resolved – the difficult dilemma of balancing the intelligence imperative of keeping America safe against the commitment to protect the openness and security of the Internet. And finally, the Obama administration’s decision to pull many final Heartbleed-like judgments into the White House raises serious questions about its actual ability to control the vast sweep and scope of such operations.
Though never publicly acknowledged, it has long been known that the NSA has utilized software vulnerabilities for espionage and even sabotage purposes. They are often labeled “zero day” flaws, meaning the computer user or system has “zero days” to fix or “patch” them before hackers can exploit the software flaw. For example, Stuxnet, the program employed by the U.S. and Israel to cripple much of Iran’s uranium enrichment project, used four different zero day vulnerabilities. These operations are run by the NSA’s Tailored Access Operations unit, which traditionally has enjoyed great leeway and independence to develop, store and exploit over time software vulnerabilities not only in computers, but also in industrial controllers, anti-virus software, heating and cooling systems, encryption protocols, and video conferencing systems.
In December 2013, President Obama’s intelligence review board recommended curtailment, or at least much stricter regulation, of the use of zero-day vulnerabilities, except for “high priority intelligence.” President Obama publicly ignored these recommendations when he announced the administration’s response in January. After a brief, intense internal debate within the administration, the president issued a directive governing zero-day flaws but kept the review process under wraps.
The Heartbleed revelations and an escalating backlash upended the administration’s plan to preserve secrecy and maximum flexibility regarding policies and procedures for handling software vulnerabilities. As noted previously, the White House’s initial response was ad hoc and piecemeal – an immediate denial of prior knowledge of the Heartbleed flaw, accompanied by assurances without details that the process was “biased toward responsibly disclosing such vulnerabilities.”
Then came an odd turn of events. On April 28, the administration issued a major policy announcement regarding zero-day policies in the form of a highly personalized blog post from Michael Daniel, the White House’s top cybersecurity adviser. Daniel described the principles and processes undergirding decisions on whether to disclose or exploit Internet vulnerabilities, but did so in a personal narrative: “we reinvigorated our efforts,” and “here are a few things I want to know” when deciding how to handle vulnerabilities. It is impossible to know just what legal standing this document enjoys.
Substantively, Daniel stated: “Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest.” But he stopped short of committing to immediate publication in all cases, noting that “disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence.”
Careful to avoid substantial opposition from civil libertarians and even some skeptics in the corporate world, Daniel spent much of the posting describing a “disciplined, rigorous and high-level decision-making process for vulnerability disclosures.” The interagency process will be run by the National Security Council (probably Daniel himself), and final decisions (the “hard calls”) will no longer be left to the NSA or FBI.
Finally, Daniel set forth a list of nine questions that would be weighed in balancing the pros and cons of disclosure, including: How widely is the vulnerable software used in core Internet infrastructure or in U.S. economic and national security systems? Does the vulnerability, if left unpatched, impose significant risk (as Heartbleed did)? How badly do we need the intelligence, and are there other ways to get it?
The administration’s actions and pledges have received a mixed reaction. Even libertarian/privacy advocates agree that the policy announcement represents an important step toward greater transparency and clarity of purpose. Still, Daniel admitted that despite the stricter procedures, there are “no hard and fast rules,” leading a spokesman for the ACLU to assert that the “policy has a (security) loophole so big you could drive a truck through it.” And earlier, Ed Black, head of the trade group that represents the major Internet companies, argued: “Broad exceptions for national security and law enforcement are too likely to be so wide as to effectively swallow the rule.”
Black’s point was strongly reinforced by a new investigative article from Bloomberg written by Riley and Jordan Robertson. In an exhaustive analysis (not challenged by the White House) of the NSA’s immense and wide-ranging commitment to programs that expose software flaws, Riley and Robertson write:
In fostering a new “cybermilitary industrial complex,” the U.S. has poured billions of dollars into an electronic arsenal built with so-called zero-day exploits…that can make anything that runs on a computer chip vulnerable to hackers. The agency’s stockpile of exploits runs into the thousands, aimed at every conceivable device, and many are not disclosed even to units within the agency responsible for defending U.S. government networks.
The article gives a number of examples of specific programs overseen by large and small contractors, from Northrop Grumman and Lockheed to SI Government Solutions and ForAllSecure. The NSA plans to spend over $26 billion on cybersecurity programs over the next five years.
While weighing the profit motives of this new interest group, the two authors also clearly lay out the larger security mission:
It’s hard to imagine the U.S.’s increasingly sophisticated cyberspying and cyberwar operations without its deep arsenal of software exploits…Like giving up sophisticated missiles and bombers, giving up an arsenal of highly valuable computer exploits could leave the country more vulnerable in a future national security crisis.
The NSA and its allies have asserted that restricting the search for software flaws would amount to unilateral disarmament against America’s foes.
What lessons can be gleaned from the policy debate at this point? First, in light of the breadth and depth of NSA cybersecurity operations, the administration was at minimum disingenuous in stating that “building up a huge stockpile” of software vulnerabilities would not be in the national security interest. By all accounts, the “huge stockpile” is already in place, and there is no evident intention of dismantling it or curtailing future investments. As the Robertson-Riley piece demonstrates, the (large) cat is out of the bag, and the administration would be better served by owning up to the size and scope of these NSA programs and defending them in public discourse. It could start by giving more formal status to its new policies regarding software vulnerabilities – and not leave the matter in the limbo of a blog posting.
Further, given the multiplicity of existing Heartbleed-like operations and the prospect of ever-increasing capabilities in the future, it is questionable whether the decision to centralize final decisions in the White House is viable. Either the real decisions will remain down in the security agencies, or the cybersecurity staff of the National Security Council will be swamped with technologically complicated and politically sensitive decisions that it lacks the knowledge and support to answer with the dispatch and precision often demanded. Congress should also assert itself here, given the importance and implications of the proposed new policies and procedures for both the Internet economy and U.S. cybersecurity.
Finally, the recent events and backlash have raised a larger issue: that lodging both offensive and defensive mandates within a single agency (NSA) is inherently unworkable. A detailed posting on Wired.com noted:
The NSA’s offense-oriented operations in the digital realm would also seem to directly oppose the agency’s own mission in the defensive realm. While the NSA’s Tailored Access Operations division is busy using zero days to hack into systems, the spy agency’s Information Assurance Division is supposed to secure military and national security systems, which are vulnerable to the same kinds of attacks the NSA conducts against foreign systems.
This has led to renewed calls to separate the Pentagon’s Cyber Command from NSA’s surveillance operations – a move that is strongly opposed by the military and current security leadership. They argue that it would destroy the synergies developed by the combined assets and inevitably lead to a weakened cybersecurity capability. In December, the president turned down such a proposal, even though the advisory board recommended the split – and even with recent events, the administration is unlikely to change its mind. Once again, this is an area where Congress needs to be brought in.
In summary, then, the Obama administration’s newly announced policies represent only the beginning of a process to rebalance national security and openness/privacy goals in a world where leaks and exposure are increasingly a part of everyday life – and where cybersecurity foes are achieving ever greater sophistication and technological expertise.
© 2016 American Enterprise Institute for Public Policy Research