Internet Governance, Technical Standards and the “Tree” Antennas

Diego Vicentin is one of the eight 2014 Milton Wolf Emerging Scholar Fellows, an accomplished group of doctoral and advanced MA candidates selected to attend the 2014 Milton Wolf Seminar, “The Third Man Theme Revisited: Foreign Policies of the Internet in a time of Surveillance and Disclosure.” Their posts highlight the critical themes and ongoing debates raised during the 2014 Seminar discussions.

Soon after I arrived in the US from Brazil to join the Center for Information Technology Policy team at Princeton as a graduate fellow, the curious image of cellular antennas disguised as trees caught my attention. It is common to see these “tree” antennas right beside the road while traveling from Princeton to New York, Philly, or Boston. Beyond the purpose of avoiding visual pollution, this attempt at producing a friendlier landscape is representative of our relationship with information and communication infrastructure. The technical and political apparatus that supports the operation of digital technologies is predominantly invisible to the end user. Only those who really pay attention can see the antenna behind the fake tree branches. Normally that is not the case: the majority of users take infrastructure for granted, rendering it invisible. While such invisibility might be seen as an unintended result of both the technical complexity of digital communication networks and their decentralized form of governance, in fact it is commonly used as a power strategy to avoid accountability as well as broader political participation in technology governance.

Fortunately, Edward Snowden’s revelations have shed light on the issue of technology governance, bringing technology into the mainstream of contemporary political discussion. We must understand and discuss the strong connections between politics and technology in order to transform invisibility into transparency. The process of shaping information and communication infrastructure and operations should be transparent to the extent that individuals can see the dynamics, the political dilemmas, and the embedded power relations.

It was in the best spirit of shedding light on the connection between policy and technology that the discussions took place at the 2014 Milton Wolf Seminar at the Diplomatic Academy of Vienna. The Seminar focused specifically on internet governance and its current and future implications for international relations, national sovereignty, privacy, and surveillance. The discussions, however, were not limited to these topics as other themes arose, including the one I am particularly interested in: the standards development process.

Although this theme was not the focus of any presentation, the word “standard” was mentioned countless times during the seminar, because one must inevitably speak about technical standards when discussing issues of internet governance. Digital networks rely on technical standards to assure interoperability and work as true means of communication. Developing technical standards is governance by design; in other words, it is de facto technology governance. It is a way of partially defining the terms through which the exchange of information will occur. These are the points I will stress in this blog post while trying to connect my research interest in standards development with the central topics discussed at the 2014 Milton Wolf Seminar.

Privacy, security, and surveillance

The December 2013 UN adoption of the resolution “Right to Privacy in the Digital Age” was an important step towards protecting the right to privacy in cyberspace. It is widely known that Brazil and Germany drafted the text as a political response to the revelations of NSA spying activities in those countries, which included surveillance of Brazilian President Dilma Rousseff and German Chancellor Angela Merkel. The international community expressed wide support for the drafting and proposal of the resolution, considering it an important measure, even though, in practice, it does not stop or even diminish surveillance. At the opening of the NETmundial internet governance meeting in São Paulo, President Rousseff indicated in her speech that the UN resolution was only the very first step towards privacy protection. As a second step, I would suggest (to Brazil, Germany, and whoever else is concerned) that they start paying attention to the development and adoption of crypto standards.

It is now widely known that the NSA has undermined crypto standards in order to monitor the secure communications of internet users. To put it in the exact words of the NSA memo leaked by Mr. Snowden and partially published by The New York Times: “For the past decade, N.S.A. has led an aggressive, multipronged effort to break widely used Internet encryption technologies…Vast amounts of encrypted Internet data which have up till now been discarded are now exploitable.” Because of the NSA’s activities, sensitive data like medical records, online banking, and private e-mail communication become more vulnerable. But how is the NSA breaking this encryption?

According to Snowden’s revelations, the NSA’s decryption program (named Bullrun) pursues simultaneous strategies, which include coercing and/or paying private companies that produce security software to insert backdoors into their own products. I am referring to the case of RSA’s BSafe software, which included a flawed algorithm as a default. Subsequently, the algorithm in question (named Dual_EC_DRBG) was also included in crypto standards approved by the National Institute of Standards and Technology (NIST) and by ISO/IEC as an option among other existing algorithms for core techniques in crypto applications. This means that the NSA succeeded in inserting a flawed technique into widely used and recognized security standards. This fact leads us to another question: how were they able to influence the standardization process in that way?

Since only the NSA has the key to exploit the flaw in the Dual_EC_DRBG algorithm, we might adopt a naïve view and assume that, before the Snowden leaks, the backdoor was a mere supposition raised by two crypto experts (Dan Shumow and Niels Ferguson) in 2007, and that the algorithm was technically defensible and therefore could be included in crypto standards. It turns out, however, that technical excellence is neither the only nor even the most important part of the standards creation process.
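It is worth pausing on what “having the key to the flaw” means. The sketch below is a deliberately simplified Python analogue of the Dual_EC_DRBG backdoor structure: the real algorithm works with elliptic-curve points and truncates its outputs, whereas this toy uses modular exponentiation, and the constants (P, Q, d, and the modulus) are made up for illustration. The essential relation is the same, though: whoever knows the secret number d linking the two public constants can recover the generator’s internal state from a single output and predict every subsequent “random” number.

    # Toy analogue of the Dual_EC_DRBG backdoor, for illustration only.
    # The real algorithm uses elliptic-curve point multiplication; here,
    # exponentiation modulo a small prime plays the same structural role.

    p = 2**61 - 1        # toy modulus; far too small for real cryptography
    Q = 3                # public constant Q
    d = 123456789        # the secret backdoor relation: P = Q^d (mod p)
    P = pow(Q, d, p)     # public constant P, published in the standard

    def dual_ec_step(state):
        """One round: emit an output derived from Q, update the state with P."""
        output = pow(Q, state, p)      # revealed to the consumer as "randomness"
        new_state = pow(P, state, p)   # kept secret inside the generator
        return new_state, output

    def backdoor_recover_state(output):
        """Knowing d turns one output into the NEXT internal state:
        output^d = Q^(state*d) = P^state = new_state (mod p)."""
        return pow(output, d, p)

    # An honest user draws a "random" number...
    state = 987654321
    state, out1 = dual_ec_step(state)

    # ...while an attacker who saw only out1, and knows d, predicts what follows.
    assert backdoor_recover_state(out1) == state
    _, out2 = dual_ec_step(state)
    _, predicted = dual_ec_step(backdoor_recover_state(out1))
    print("attacker predicted the next output:", predicted == out2)

This is precisely why the unexplained provenance of the standard’s P and Q constants aroused the experts’ suspicion in 2007: if anyone had generated them from a known d, the backdoor would already be in place.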

To influence the development of a standard, an organization (the NSA or any other) must have accumulated technical expertise as well as the ability to manage the bureaucratic structure of a given standards development organization (SDO). It must understand the internal policies and procedures of the SDO in question and use them to guide the process toward the desired outcome. Mastering the language, the specific vocabulary that every SDO seems to have, is also essential. An engineer very familiar with standards development once told me that “every word matters” in the standards drafting process. Making alliances with other stakeholders and negotiating appropriately with those who have conflicting interests is also critical. Of course, the task becomes easier if people from your organization occupy leading positions in the SDO structure, such as chairman of the working group responsible for drafting the standard.

These are only the basic requirements for really taking part in the game of building ICT standards. There are several active SDOs developing standards in the ICT arena, and they vary in structure and practice. To give a few examples, SDOs differ in their technical scope, geographical area of influence, openness, and membership and participation rules. If a given company or organization wants to exert some level of influence over the whole standardization ecosystem, it must be present in all the relevant SDOs. In other words, the ability to influence the standards development process is directly related to an organization’s ability to participate in different standardization bodies. This gives an indication of how costly it is to participate in standards development, and it helps explain why the NSA had to spend so much money on its decryption program ($250 million a year).

To clarify my point, let us examine an example of a security standard that may have suffered from NSA influence. IPSEC is a standard that provides data authentication, integrity, and confidentiality as data travels between communication points across IP networks. To put it differently, IPSEC is a fundamental crypto tool that should protect individual data packets while they are in transit. It is standardized by the Internet Engineering Task Force (IETF), a widely respected “technical community” responsible for drafting internet protocols. The work is mainly done via online mailing lists and face-to-face interactions within different working groups (WGs) dedicated to key technical areas. Participation is open to everyone, and contributions can be made in face-to-face meetings as well as online. Mailing list interactions are published and open to public scrutiny. I tend to agree with Laura DeNardis: regarding governance and procedures, “IETF is an open standards-setting organization, with processes that adhere to democratic principles of transparency and participation” (2014:70).
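To make “protection of individual data packets” concrete, here is a toy sketch of the per-packet guarantee that IPSEC’s encryption layer aims to provide: each packet is independently encrypted and authenticated, so an eavesdropper learns nothing from its contents and any tampering is detected on arrival. This is a conceptual illustration, not the actual ESP packet format; it assumes the third-party Python cryptography package, and in real IPSEC the key would be negotiated by a key-exchange protocol (IKE) rather than generated locally.

    import os
    import struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # real IPSEC: negotiated via IKE
    aead = AESGCM(key)
    salt = os.urandom(8)                       # per-session nonce prefix

    def protect(seq: int, payload: bytes) -> bytes:
        """Encrypt and authenticate one packet payload (ESP-like, simplified)."""
        header = struct.pack(">Q", seq)        # sequence number, sent in the clear
        nonce = salt + struct.pack(">I", seq & 0xFFFFFFFF)    # unique per packet
        return header + aead.encrypt(nonce, payload, header)  # header is authenticated too

    def unprotect(packet: bytes) -> bytes:
        """Verify and decrypt one packet; raises if it was tampered with."""
        header, ciphertext = packet[:8], packet[8:]
        (seq,) = struct.unpack(">Q", header)
        nonce = salt + struct.pack(">I", seq & 0xFFFFFFFF)
        return aead.decrypt(nonce, ciphertext, header)

    pkt = protect(1, b"GET /records HTTP/1.1")
    print(unprotect(pkt))  # original payload; any bit flip makes decryption fail

Had a clean, simple standard with these properties been deployed everywhere by default, most of the traffic the NSA collects would be opaque to it, which is exactly why the complexity of the standard matters so much in what follows.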

But the IPSEC case indicates that this openness and transparency might not be enough. Right in the middle of the Bullrun revelations, a respected free software activist, John Gilmore, published comments about his experience as an IPSEC WG contributor, stressing that the NSA had undermined the IPSEC standards by making them “incredibly complicated.” He highlighted the participation of NSA employees in the working group as well as their leadership roles, which included serving as the document editors. Gilmore went on to note that “every once in a while, someone not an NSA employee, but who had longstanding ties to NSA, would make a suggestion that reduced privacy or security, but which seemed to make sense when viewed by people who didn’t know much about crypto.” Gilmore’s observations seem to explain why the highest goals of IPSEC were not achieved. As Professor Ed Felten wrote at Freedom to Tinker, “a successful and widely deployed IPSEC would have been a game-changer for Internet security, putting lots of traffic under cryptographic protection,” and yet “indeed, IPSEC has not had anything like the impact one might have expected.”

So we may conclude that the NSA succeeded in undermining a standard with real potential for increasing online security and privacy protection. In contrast to what happened with BSafe, the strategy the NSA adopted in the IPSEC case was to create problems in the development process in order to produce a weak and complicated standard. From my perspective, what the NSA is doing is policy by design. It is interfering with the process of shaping crypto standards in order to ensure the continuity and feasibility of its mass surveillance policy. It is a perverse policy because it undermines online security, internet freedom, and the human right to privacy.

Governments and civil society organizations committed to the defense of online freedom, privacy, and security should participate in the “technical communities” and help to shape standards that protect these values. More than that, they must help to shape the ways these communities operate, in order to make the process of constructing standards more inclusive, open, and transparent. Internet governance depends on the technopolitical nature of standards. The main argument here is not new; it goes back at least to Lessig’s “Code is Law.” But it must be remembered and reinforced against the enduring misconception that the technical operation of the internet is a separate issue from its political, social, and commercial dimensions. This is the blindness that keeps one from seeing the antenna behind the fake tree branches.

//

Diego Vicentin is a Ph.D. candidate in Sociology at the University of Campinas (Brazil) and a research fellow (2013-2014) at the Center for Information Technology Policy at Princeton University (USA). Diego has been studying the development history and mode of operation of cell phone networks by exploring their technical and political aspects. Currently he is investigating the intersection between politics and technology, specifically with regard to decision-making related to the technical standards of mobile broadband networks. Twitter: @diego_jv

Featured Photo Credit: Diego Vicentin
