System error: How bad analysis poisons tech policy




Key points in this Outlook:

  • Sound analysis of technology policy requires knowledge of law, economics, engineering, and policy development—a combination too often lacking in current analysis.
  • A lack of knowledge about any of these four areas—or bias or carelessness regarding them—is leading to conclusions that are incomplete, unclear, or misleading, increasing the risk of unintended consequences.
  • Thoughtful, thorough, and consistent analysis is especially necessary when considering complex issues surrounding network service provider restrictions and subsidies.


Technology policy is especially difficult because it combines four complex disciplines: law, economics, engineering, and policy analysis. Very few people have comprehensive backgrounds in all four fields, so most participants rely on the judgments of those with stronger grounding in each one. But policy advocates often misstate facts in their own areas of expertise, either intentionally or as a result of subconscious bias.

Four recent examples stand out:

  • A recent engineering analysis of the failed Australian National Broadband Network (NBN) in IEEE Spectrum incorrectly judged the network capacity needs of Internet applications and the capabilities of new and emerging technologies.
  • Economic studies by the New America Foundation’s Open Technology Institute (OTI) and the Consumer Federation of America of US and European triple-play packages (combining Internet, cable TV, and telephone service) made dubious connections between broadband fees, content prices, and service quality.
  • Legal arguments in favor of net neutrality have consistently failed to provide a logically consistent justification for the belief that regulations drafted for the monopoly telephone network have any immediate application to the complex Internet ecosystem as a matter of law or policy. 
  • A policy analysis by New York Times technology policy reporter Edward Wyatt incorrectly asserted that an array of studies supported a judgment about trending service quality when only one study included vital trend line data, and that study contradicted the supposed trend.


If we are going to restrict free-market network service providers in some areas and subsidize them in others, our analysis needs to be thoughtful, thorough, and consistent. Let’s see how these instances of bad analysis went wrong.

Bad Engineering Analysis

“The Rise and Fall of Australia’s $44 Billion Broadband Project,” by Rodney S. Tucker, one of the architects of the NBN, displays the dubious arithmetic the Australian Labor Party used to sell a white-elephant project to the people.[1] The government commissioned an economic study from Deloitte on the benefits of the NBN to Australian households, touting an expected $3,800 (USD) average annual benefit by 2020, but the study failed to account for the project’s cost of $5,800 per household or for the benefits that privately funded networks might produce on their own over the next seven years. A one-sided claim about benefits cannot logically justify expenditures.

In addition to this bad economic analysis, Tucker repeats two common myths about the nature of fiber-optic cables: that they are “future proof” (able to meet any possible future need) and that they are uniquely capable of serving the needs of video-streaming applications. In fact, fiber is certainly not “future proof” if the networks of the future will be primarily mobile, as now seems likely. As for video, streaming applications consume network capacity on the order of 2–4 megabits/second per program today, with that figure expected to double in the next 5 to 10 years as higher-definition standards come online. On the other hand, the ongoing shift from large living-room displays to handheld smartphones and tablets reduces the bandwidth of video streams, so the overall network load will probably grow only modestly.
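To make the arithmetic concrete, here is a minimal back-of-envelope sketch. The per-stream rates and the expected doubling come from the figures above; the number of simultaneous streams per household is an illustrative assumption, not measured data.

```python
# Back-of-envelope estimate of peak household video demand.
# Per-stream rates come from the text (2-4 Mb/s per program today, doubling
# over the next decade); the number of simultaneous streams is an assumption.

STREAM_MBPS_TODAY = 4        # high end of today's 2-4 Mb/s per program
DOUBLING_FACTOR = 2          # expected doubling as higher-definition standards arrive
SIMULTANEOUS_STREAMS = 3     # assumed worst case: three concurrent programs per household

peak_today = STREAM_MBPS_TODAY * SIMULTANEOUS_STREAMS
peak_next_decade = peak_today * DOUBLING_FACTOR

print(f"Peak video demand today:          {peak_today} Mb/s")
print(f"Peak video demand in ~10 years:   {peak_next_decade} Mb/s")
print(f"Fits within a 50 Mb/s FTTN tier:  {peak_next_decade <= 50}")
```

Even under these worst-case assumptions, household video demand stays comfortably below the 50–100 megabits/second that copper-based access networks can already deliver.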

Consequently, Tucker’s analysis is wrong on two clear points of engineering: it overstates capacity requirements and understates the ability of free-market firms to meet actual requirements by investment and advances in technology.

This analysis is not new: I gave a keynote address at the final Australian Telecom Users’ Group (ATUG) Annual Meeting in 2011 in which I made these points:

  • Current copper/fiber hybrid networks have potential capacity of 1–10 gigabits/second as soon as such capacity is needed.
  • Spending money now for an all-fiber network presumes that bandwidth-hungry applications are already here.
  • We have had gigabit networks on corporate and college campuses for nearly 15 years and have seen only smatterings of apps that need more than 100 megabits but less than 1 gigabit.
  • Why spend money today to support apps that may (or may not) emerge tomorrow when the cost of networking equipment is bound to be lower in the future?


This was not the message the NBN’s fans (ATUG was the NBN’s chief grassroots lobbyist until its demise later in 2011) wanted to hear, but I was not the only one delivering it. Before I spoke, Labor’s telecom minister, Hon. Stephen Conroy, faced off on this very point with his counterpart in the Liberal Party (Australia’s free marketers), shadow telecom minister Hon. Malcolm Turnbull. Even then, the Liberals were arguing for a more prudent, incremental approach. The NBN’s other advocates were the network equipment manufacturers, who correctly saw the project as an opportunity to make additional sales.

Tucker’s analysis of the tradeoffs between FTTN (fiber to the neighborhood or node) and FTTH (fiber to the home) is shockingly inconsistent:

An FTTN layout would be a bad idea. Using VDSL, a home connection could theoretically deliver 50 Mb/s, but only if the node sat very close to the house—a mere 100 meters or so away. Since the panel disbanded, a newer standard, VDSL2, has emerged. When combined with a novel interference-reduction technique called vectoring, it can provide download speeds up to about 100 Mb/s over short distances. And now an even faster standard known as G.fast is in the works, which promises download rates up to 1 Gb/s, but again, only for very short connections. For customers on longer loops, telecoms would be able to guarantee only about 50 Mb/s.[2]

It should be clear that technical advances in DSL, the slower of the two common copper-wire-based broadband technologies, are moving much faster than increases in demand for bandwidth. Newly developed DSL technology upgrades will permit 50 times more capacity over short distances than current systems support, more than satisfying anticipated increases in customer usage over the next decade.

Beyond DSL, the DOCSIS system employed on the cable networks already deployed in Australia and elsewhere runs at more than 5 gigabits/second if we count video content; this could easily increase to 10 gigabits in the next two to five years. The standard that will permit 10 gigabit cable, known as DOCSIS 3.1, is already here.
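The 5 and 10 gigabit figures follow from simple channel arithmetic. The sketch below is an illustration under stated assumptions (per-channel rate and usable downstream spectrum), not a description of any particular cable plant.

```python
# Rough cable-plant capacity arithmetic; the spectrum and per-channel figures
# are assumptions typical of DOCSIS 3.0 deployments, not measured values.

CHANNEL_WIDTH_MHZ = 6        # one DOCSIS 3.0 / North American video channel
MBPS_PER_CHANNEL = 38        # approximate net rate of a 256-QAM channel
DOWNSTREAM_START_MHZ = 54    # assumed bottom of the downstream band
PLANT_TOP_MHZ = 860          # assumed top of the plant's usable spectrum

channels = (PLANT_TOP_MHZ - DOWNSTREAM_START_MHZ) // CHANNEL_WIDTH_MHZ
total_gbps = channels * MBPS_PER_CHANNEL / 1000
print(f"Whole-plant capacity (video + data): ~{total_gbps:.1f} Gb/s")

# DOCSIS 3.1 replaces fixed 6 MHz channels with OFDM and denser modulation;
# its commonly cited downstream target is on the order of 10 Gb/s.
print("DOCSIS 3.1 downstream target:        ~10 Gb/s")
```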

The key variable for DSL is the distance between the neighborhood node and the home, which is completely under the control of the network operator. The key variable for cable is the allocation of bandwidth between Internet access and television; this is also under operator control. Under any reasonable analysis, FTTN and cable can keep up with actual demand for at least the next 10 years, at which time FTTH connections will be cheaper than they are today and mobile connections will be faster and more pervasive.

Tucker certainly knows these facts, but like so many people who have embraced an essentially irrational ideal, he cannot question his previous ideas. The giveaway here is the claim that it is possible to develop future-proof technologies. This claim depends on knowledge of the future that no one has.

Bad Economic Analysis

OTI’s The Cost of Connectivity 2013 report and the Consumer Federation of America’s supporting argument, Comparing Apples to Apples: How Competitive Provider Services Outpace the Baby Bell Duopoly, engage in some amazing economic gymnastics in pursuit of their common goal, an NBN-like public network infrastructure.[3] Their objective is apparently to “prove” that something is wrong with America’s broadband policy system. The OTI report claims: “The new data underscores the extent to which U.S. cities lag behind cities around the world, further emphasizing the need for policy reform. Rather than allowing American cities to fall behind, policymakers should reassess current policy approaches and implement strategies to increase competition, in turn fostering faster speeds and more affordable access.”

Overall, the United States has improved in average Transmission Control Protocol (TCP) connection speeds over the past four years: in 2009, the US was ranked 22nd by Akamai Technologies on this measure, but it now stands in 8th place.[4] The data therefore do not support the conclusion that the US is “falling behind” other countries. Similarly, the fact that the average Internet speed in any particular US city is higher or lower than that in a noncomparable foreign city tells us nothing about the reasons for the difference, let alone whether the difference is meaningful.

OTI employs a very traditional method, cherry picking. Comparing speeds and prices in selected US cities against selected foreign cities can be used to “prove” nearly anything; we simply have to select a collection of cities that make our point and omit those that do not. This method falls apart only if we compare a large number of cities or if we compare cities that are in some way “comparable.”

OTI is careful not to use comparable cities. In the US, it chose an odd collection of small towns and big cities:

    1. Bristol, VA
    2. Chattanooga, TN
    3. Kansas City, KS and MO
    4. Lafayette, LA
    5. Los Angeles, CA
    6. New York, NY
    7. San Francisco, CA
    8. Washington, DC

The foreign comparisons are a bit more coherent, consisting solely of the national capitals and major cities with the fastest and newest networks:

    1. Amsterdam, Netherlands
    2. Berlin, Germany
    3. Bucharest, Romania
    4. Copenhagen, Denmark
    5. Dublin, Ireland
    6. Hong Kong, China
    7. London, United Kingdom
    8. Mexico City, Mexico
    9. Paris, France
    10. Prague, Czech Republic
    11. Riga, Latvia
    12. Seoul, South Korea
    13. Tokyo, Japan
    14. Toronto, Canada
    15. Zurich, Switzerland

OTI does not explain what motivated its choice of cities, but some patterns are evident. There are 34 members of the Organization for Economic Cooperation and Development (OECD) and roughly 200 nations in the world, so it had a lot of choices. The international comparisons tend to be among the densely populated national capitals with the fastest networks, generally those where government programs have installed fiber. The US cities are half small cities like Lafayette, Louisiana, and half large cities like New York and Los Angeles. The small cities have fiber as a result of government subsidies (de facto subsidies, in the case of Kansas City), while the large cities have only the networks the market has produced on its own.

There really are no US cities comparable to Hong Kong, Seoul, Tokyo, and Amsterdam in terms of subsidies and population densities, so we are left trying to judge the impact of policy between Seoul (the most densely populated city in the OECD) and Bristol, Virginia, an isolated area far outside the technology mainstream. As others (such as the Phoenix Center) have observed, these are not exactly intuitive comparisons.[5]

If we were looking for the effects of policy on broadband quality and value, the honest method would involve comparing regions that are similar in every important respect except policy. That is, we would compare regions in which population density, living costs, and incomes are reasonably similar, and we would also control for existing infrastructure at some relevant point of divergence in the past. In my February 2013 study of international broadband for the Information Technology and Innovation Foundation, The Whole Picture: Where America’s Broadband Networks Really Stand, I examined cable TV and telephone network properties at the dawn of the broadband era.[6] Countries that had high cable TV build-out in 2000 have had very little reason to launch FTTH network projects, for example, because DOCSIS meets consumer needs and expectations quite comfortably.
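A minimal sketch of what comparing comparables might look like in practice, assuming a table of regions with density, income, policy regime, and measured speed; the field names, the data rows, and the similarity metric are all hypothetical.

```python
# Hypothetical like-for-like comparison: pair each region with the most similar
# region under a different policy regime, then compare outcomes. The rows and
# the crude similarity metric are illustrative only.

regions = [
    {"name": "A", "policy": "subsidized fiber", "density": 16000, "income": 40, "speed": 55},
    {"name": "B", "policy": "market-led",       "density": 15500, "income": 42, "speed": 48},
    {"name": "C", "policy": "market-led",       "density": 300,   "income": 38, "speed": 18},
]

def distance(x, y):
    """Smaller is more similar; normalized gap in density and income."""
    return abs(x["density"] - y["density"]) / 20000 + abs(x["income"] - y["income"]) / 50

for region in regions:
    peers = [p for p in regions if p["policy"] != region["policy"]]
    match = min(peers, key=lambda p: distance(region, p))
    print(f"{region['name']} ({region['policy']}) vs {match['name']} ({match['policy']}): "
          f"{region['speed']} vs {match['speed']} Mb/s")
```

Comparing Seoul to Bristol skips this matching step entirely, which is why the resulting rankings say more about the sample than about policy.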

Nations that came into the broadband era with no cable and little telephone coverage, such as former Soviet satellites Romania, Latvia, and the Czech Republic, have naturally provided broadband service over new fiber-optic cable systems. This is not the result of a conscious “policy choice” among equally available options; it is simply an accident of history whose only policy dimension is the USSR’s indifference to telecommunications as an imperial priority.

So nations that entered the broadband era with little or no high-quality infrastructure have been forced to install fiber, and fiber permits arbitrarily high data rates. One point of comparison: the lasers and sensors that transmit and receive bits on fiber networks (transceivers) are currently cheaper in the 1 gigabit/second variation than in the 100 megabit/second configuration. So fiber networks are driven by economics to run at extremely high speeds internally.

Do people whose computers are connected to networks with high internal speeds automatically use the Internet more often and more effectively? Research tends to say no to this question, as the nations with the highest levels of Internet use do not tend to be those with the fastest networks. The United States is 8th in average connection speed per Akamai, but second in volume of data usage per US Telecom.[7]

OTI’s method—selecting international cities with the newest, fastest, most subsidized networks and ranking utility on the basis of dollars per bit per second—distorts both utility and cost. Is it reasonable to compare the cost of a subsidized network like high-density Hong Kong’s fiber to an organic US network without including the cost of the subsidy itself? This does not pass the laugh test. Similarly, OTI compares “triple-play” packages including voice and video services without considering the cost of television programming in the United States. This glaring omission renders price comparisons meaningless.

So the only lesson to learn from the OTI report is that carefully selected data can be enlisted to “prove” practically anything. Oddly, OTI’s cherry-picked data do not actually prove its conclusion that high levels of competition lead to lower prices, as the slower and more costly networks are located in areas with the highest levels of competition.[8]

Mark Cooper of the Consumer Federation of America comes to OTI’s rescue with some additional data of his own, but his analysis suffers from the same defects. What we care about is how many people are using networks under a given policy regime, how heavily they are using them, and whether they are able to access leading-edge applications. Doing well on these terms is very important, while being connected to a network with lots of unused capacity is much less so. It is therefore not meaningful to rank networks in terms of dollars per bit per second of speed, because there is no linear relationship between speed and utility.

A 1,000 megabit/second network is not 10 times more useful or more valuable than a 100 megabit/second network. It is clearly more useful because some interactions are marginally faster, but not 10 times more useful. In fact, a 20 megabit/second network is sufficient for most households to use practically all the Internet has to offer (as long as the speed is consistent and reliable). The need for gigabit speeds is not apparent in today’s world except for connections that service hundreds of people or support complex interactions with advertising networks while Web pages are loading.
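One way to make the diminishing-returns point explicit is a concave utility curve. The logarithmic form and the prices below are illustrative assumptions, not an established model, but any concave curve leads to the same conclusion about dollars-per-megabit rankings.

```python
import math

# Illustrative diminishing-returns model of broadband utility. The logarithmic
# form and the 20 Mb/s "sufficiency" reference point are assumptions chosen to
# show why utility does not scale linearly with speed.

def utility(mbps, sufficient=20):
    """Concave utility: rapid gains up to 'sufficient', slow gains beyond it."""
    return math.log(1 + mbps / sufficient)

for mbps in (20, 100, 1000):
    print(f"{mbps:>5} Mb/s -> relative utility {utility(mbps):.2f}")

# Dollars per megabit rewards raw speed; dollars per unit of utility does not.
tiers = {100: 50.0, 1000: 80.0}   # hypothetical monthly prices
for mbps, dollars in tiers.items():
    print(f"{mbps} Mb/s at ${dollars:.0f}/month: "
          f"${dollars / mbps:.2f} per Mb/s, ${dollars / utility(mbps):.2f} per unit of utility")
```

Under the dollars-per-megabit metric the gigabit tier looks roughly six times better; under any diminishing-returns utility model, the gap largely disappears.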

Cooper and OTI seem to recognize this when they analyze wireless services in the United States and Europe. They tout lower prices for data in the European Union thanks to price controls but forget to tell us that speeds in the United States are substantially higher than those in Europe, thanks to our more advanced 4G technology. (The European Union is stuck on 3G, for the most part.) If speed is all-important, why does it not factor into Cooper’s and OTI’s analyses of mobile broadband? A little consistency goes a long way.

So OTI’s and Cooper’s analyses combine bad data with a bad utility (in the sense of “usefulness”) model. The flaw is evident in Cooper’s claim that Sprint and T-Mobile “offer much more attractive service than services offered by Baby Bell wireless broadband providers” Verizon and AT&T.[9] If this is the case, why don’t they have more customers, more revenue, and more earnings? Either the consumer is not very bright, or Cooper’s model of attractiveness is deeply flawed. The latter explanation is the correct one, because the most popular networks are those that offer the best service.

So the giveaway is an economic model utterly inconsistent with market behavior. As philosopher and scientist Alfred Korzybski famously remarked, “The map is not the territory.” Economic models that do not accurately predict market behavior are simply wrong.

Bad Legal Analysis

“Net neutrality” is the idea that Internet service providers should offer generic service to the public and should not sell acceleration services to Internet firms that sell content and services to consumers. It is useful to study the original arguments for net neutrality now that the FCC’s net neutrality rules have been vacated by the DC Circuit Court. The seminal texts to understand are Tim Wu’s article “Network Neutrality, Broadband Discrimination” and an FCC filing by Wu and his mentor Lawrence Lessig on August 22, 2003.[10]

Wu and Lessig develop an argument for “neutral” network treatment of data in some spheres but not in others. Wu, in particular, recognizes that neutrality is actually a very subtle notion, even in the Internet context.

The Internet protocols, TCP and Internet Protocol (IP), are famously indifferent to application requirements from above TCP/IP (in the protocol stack architecture) and also to network capabilities from below. For example, real-time applications such as voice require a low-delay transport service, which is designed into DSL, DOCSIS, Wi-Fi, and Ethernet, but TCP/IP prevents the application from requesting a particular transport service from the network. TCP/IP is not actually “neutral” in this respect; it is more properly deemed “nonresponsive,” as it adopts the policy that all applications and all networks are one-dimensional and the same.
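This nonresponsiveness is visible at the sockets interface. In the sketch below, a hypothetical voice application marks its packets with a DSCP code point; the mark is only a hint carried in the IP header, and nothing in TCP/IP obliges the networks along the path to honor it, which is why the protocols are better described as nonresponsive than neutral. The destination address and port are placeholders.

```python
import socket

# A standard application can mark its packets with a DSCP code point, but IP
# makes no end-to-end promise about whether any network will act on the mark.
# The destination address and port are placeholders.

EF_DSCP = 46                 # "Expedited Forwarding," conventionally used for voice
TOS_VALUE = EF_DSCP << 2     # DSCP occupies the upper six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Delivery remains best effort: no guarantee of timing, ordering, or arrival.
sock.sendto(b"voice frame", ("192.0.2.10", 5004))
```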

Wu admits, “The argument for network neutrality must be understood as a concrete expression of a system of belief about innovation, one that has gained significant popularity over last two decades.”[11]

So one question that has to be asked is whether there is any concrete evidence that this belief system is based on reality. The best Wu can offer is an elaboration of net neutrality theory: “A communications network like the Internet can be seen as a platform for a competition among application developers. Email, the web, and streaming applications are in a battle for the attention and interest of end-users. It is therefore important that the platform be neutral to ensure the competition remains meritocratic.”[12]

Clearly, this elaboration is a tautology that simply restates the belief that a “neutral” network facilitates competition; it does not explain how this belief can possibly be true or even what is meant by the word “neutral.” Wu admits, “the concept of network neutrality is not as simple as some IP partisans have suggested. Neutrality, as a concept, is finicky, and depends entirely on what set of subjects you choose to be neutral among.”[13]

In today’s world, IP indifference is inherently not neutral:

The technical reason IP favors data applications is that it lacks any universal mechanism to offer a quality of service (QoS) guarantee. It doesn’t insist that data arrive at any time or place. Instead, IP generally adopts a “best-effort” approach: it says, deliver the packets as fast as you can, which over a typical end-to-end connection may range from a basic 56K connection at the ends, to the precisely timed gigabits of bandwidth available on backbone SONET links. IP doesn’t care: it runs over everything. But as a consequence, it implicitly disfavors applications that do care.[14]

The Internet’s history suggests that problems are not solved until they become critical. The Internet does not have an inherited QoS mechanism because early applications did not require one, just as it did not have a congestion management mechanism until ARPANET was bypassed in the mid-1980s and the Internet suffered “congestion collapse.”[15]

The Internet has now reached the point where the migration of smartphones to IP requires a general-purpose QoS mechanism to support telephone calling over IP. The technical barrier can be overcome, but the insistence on a “neutrality” mandate (more properly an indifference mandate than a true neutrality rule with respect to the broader universe of network applications) has become a policy barrier. It comes as no surprise that the firms that support the indifference mandate are those whose businesses run on Web servers, a classically “data-intensive” application.

Wu’s paper makes a strong case that “open access” is not a productive measure for today’s Internet and seeks to replace it with net neutrality. But he develops a rationale for net neutrality that runs counter to his own evidence. He stipulates that an indifferent TCP/IP protocol is nonneutral with respect to networks but insists that it can be viewed as neutral with respect to applications. This distinction is not logically consistent because the different service grades designed into networks exist for the purpose of supporting different applications. Wu has something else in mind: he posits that implementing network-wide QoS requires unacceptable changes in the business relationship between application designers and network operators:

True application neutrality may, in fact, sometimes require a close vertical relationship between a broadband operator and Internet service provider. The reason is that the operator is ultimately the gatekeeper of quality of service for a given user, because only the broadband operator is in a position to offer service guarantees that extend to the end-user’s computer (or network). Delivering the full possible range of applications either requires an impracticable upgrade of the entire network, or some tolerance of close vertical relationships.
This point indicts a strict open-access requirement. To the extent open access regulation prevents broadband operators from architectural cooperation with ISPs for the purpose of providing QoS dependent applications, it could hurt the cause of network neutrality.[16]

In effect, Wu’s network neutrality regime gives broadband service providers a triple-play monopoly by preventing the sale of QoS at the gateway between the broadband network and the Internet as a whole. He argues, “Hence, the general principle can be stated as follows: absent evidence of harm to the local network or the interests of other users, broadband carriers should not discriminate in how they treat traffic on their broadband network on the basis of inter-network criteria.”[17]

Broadband service providers “discriminate” within their own networks, provisioning QoS in order to sell TV and voice services alongside Internet access, but Wu wants to ban the sale of such “discrimination” to external service providers. (If an external service provider cannot ensure timely delivery to users with a QoS guarantee, its services cannot be fully competitive; if access to QoS is not bound to a fee, then all applications will seek QoS whether they need it or not, and it becomes useless.) In other words, Wu trades a triple-play monopoly for broadband providers in exchange for nondiscriminatory access to the Internet. This is the Kingsbury Commitment—a trade by the Justice Department that granted AT&T a telephone monopoly in return for a universal service commitment—in different clothes.

Kingsbury turned out to be bad law because it prevented the uptake of new technologies in communication networks. Net neutrality is bad law for the same reason. It seeks to preserve the Internet status quo from the 1980s and 1990s long after the rationale for that status quo has ceased to exist. We now have the capability to build networks that can support diverse applications without imposing monopoly conditions and without hobbling technical advances. Only the law stands in the way of this progress.

Wu’s analysis ultimately falls back on telecom law developed to regulate monopoly networks. His key criteria—discrimination, private benefits without public harm, and foreign attachments—hail from telephone law, as he admits: “Its origins are found in the Hush-a-Phone case, where the FCC ordered Bell to allow telephone customers to attach devices that ‘[do] not injure . . . the public in its use of [Bell’s] services, or impair the operation of the telephone system.’”[18]

The giveaway here is the attempt to envelop the Internet in a body of law devised for technology and marketplace realities of a bygone era. Broadband networks and the broadband marketplace exhibit utterly different dynamics that policymakers must recognize if they are to avoid strangling these networks in the crib. Law has a built-in bias toward precedent, but we have to be aware of unprecedented realities when we find them.

Bad Policy Outcome Analysis

On December 29, 2013, the New York Times ran an article by technology policy writer Edward Wyatt that was meant to provide a comprehensive view of the international broadband networking picture and where the US stands in it.[19] This was not just a daily news report or a quick reaction to a new study; Wyatt researched the subject for several weeks, interviewing leading policy figures and reading the reports and studies the interview subjects recommended.

The purpose of such a review is to establish which of the many policy frameworks employed around the world is most effective. When done correctly, retrospective analysis identifies groups of comparable nations that have adopted different policy frameworks and then determines which have seen the most positive outcomes. Retrospective analysis is meant to show what works and what does not.

Wyatt claimed the US is not doing well: “The United States, the country that invented the Internet, is falling dangerously behind in offering high-speed, affordable broadband service to businesses and consumers, according to technology experts and an array of recent studies.”[20]

This is a very specific claim that depends on two factual foundations and one theoretical one: for this to be true, it is necessary to show a trend line drawn from international rankings with a downward slope, and also to produce a number of sources—it is unclear how many studies constitute an “array,” but it must be more than two or three. It is also necessary to show that the measured outcome is meaningful; in this case, that would require a showing that having the absolutely fastest network produces a strong improvement in economic growth and innovation.
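As a concrete illustration of the trend-line test, the sketch below fits a line to the only ranking series for which the sources supply a clear start and end point: the Akamai average connection speed rankings cited earlier in this paper (22nd in 2009, 8th in 2013). Intermediate quarters are omitted, so the slope is only indicative; “falling behind” would require the rank number to be rising.

```python
# Trend-line check using only the two Akamai rankings cited in this paper
# (22nd in 2009, 8th in 2013). A positive slope (rising rank number) would
# mean the US is slipping relative to other nations.

years = [2009, 2013]
us_rank = [22, 8]            # 1 = fastest average connection speed

mean_x = sum(years) / len(years)
mean_y = sum(us_rank) / len(us_rank)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, us_rank))
         / sum((x - mean_x) ** 2 for x in years))

print(f"Rank change per year: {slope:+.1f}")
print("Falling behind" if slope > 0 else "Improving or holding steady")
```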

It is quite simple to check the factual validity of this claim, as we need merely to read Wyatt’s sources and extract their trend line data. In a tweet, Wyatt revealed his sources as including “Ookla. Akamai, World Econ Forum, OECD, Open Technology Institute.”[21]

But Wyatt’s sources do not actually support his claim:

  • Ookla is a speed-testing service that provides an unscientific measurement of the connection speed of those users who choose to use it and averages the speeds of user tests across cities and nations. It does not provide a database of past performance measurements, so even if Ookla’s current data were representative and not simply an artifact of self-selection (people do not use Ookla unless they have a connection problem), these data do not support historical claims.
  • The World Economic Forum (WEF) has never published a study on average international broadband speeds. The WEF “Delivering on Digital Infrastructure” initiative promises to publish a report of some kind in May 2014. At present, WEF evaluates the US broadband market favorably: “The US market is defined by a ‘more for more’ approach. Consumers pay more for connectivity, but also consume more services; operators generally invest more and have healthier operations. While this environment has inspired adequate investment, there is an opportunity to support more innovation from existing players and new entrants. Policy-makers should encourage competition and innovation at a local/municipal level and eliminate barriers to innovative models.”[22]
  • The Open Technology Institute’s analysis, examined earlier in this paper, does not provide data that would support the claim.
  • OECD collects data on the advertised speeds of available broadband services and provides historical data. These data do not tell us two crucial things, however: how many people subscribe to broadband services at various advertised speed levels and how fast those services actually are. Other studies have shown that most nations do not deliver the speeds they promise. The European Commission admits this: “European consumers are not getting the broadband download speeds they pay for. On average, they receive only 74% of the advertised headline speed they have paid for, according to a new European Commission study on fixed broadband performance.”[23] The OECD data cannot support the claim that the US is “falling dangerously behind” other countries.
  • This leaves us with the Akamai State of the Internet data sets mentioned in the analysis of the OTI report.[24] Akamai has published speed measurements taken in nearly 200 nations and territories each quarter since 2008. Presently these measurements consist of an “Average Peak Connection Speed” index that tells us roughly how much capacity each broadband connection has; an “Average Connection Speed” index that tells us how heavily each broadband connection is shared; and a “High Speed Broadband” index that tells us what proportion of the broadband connections in a region support the most demanding applications. The Akamai data do not support Wyatt’s claim either.


The most meaningful and consistent of Akamai’s indices is Average Connection Speed, as it indicates the speed at which web pages load. Since it has been tracked, the US has occupied positions from 22nd to 8th (figure 1). The trend line does not suggest the US is falling behind, as the current ranking is as high as it has ever been. The overall trend line for this metric is sharply upward.

Since 2010, Akamai has also published an Average Peak Connection Speed index that seeks to measure the gross capacity of each broadband connection. This index is created by ignoring all measurements on a given IP address except the fastest one and averaging those peak measurements across the region. This is a smaller sample than Average Connection Speed, and it predictably shows much more variation from quarter to quarter (figure 2). The US has ranked as low as 16th and as high as 7th on this scale, with the current ranking in 16th place for the second time since 2011. The overall trend line for this measurement is gently downward.
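The construction of the peak index described above is easy to state as code. This is only a sketch of the stated method (keep the fastest measurement per IP address, then average across the region), not Akamai’s actual pipeline; the sample measurements are hypothetical.

```python
from collections import defaultdict

# Sketch of the stated aggregation: keep only the fastest measurement seen for
# each IP address, then average those peaks across the region. Sample data is
# hypothetical; this is not Akamai's production pipeline.

measurements = [                       # (ip_address, measured Mb/s)
    ("198.51.100.1", 12.0), ("198.51.100.1", 31.5), ("198.51.100.1", 8.2),
    ("198.51.100.2", 55.0), ("198.51.100.2", 47.3),
    ("198.51.100.3", 4.1),
]

peak_by_ip = defaultdict(float)
for ip, mbps in measurements:
    peak_by_ip[ip] = max(peak_by_ip[ip], mbps)

average_peak = sum(peak_by_ip.values()) / len(peak_by_ip)
mean_of_all = sum(m for _, m in measurements) / len(measurements)

print(f"Average Peak Connection Speed: {average_peak:.1f} Mb/s")
print(f"Mean of all measurements:      {mean_of_all:.1f} Mb/s (for contrast)")
```

Because only the single best reading per address survives, a handful of unusually fast samples can move the index noticeably, which helps explain its quarter-to-quarter volatility.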

The final index is the oldest one. Since its inception, the State of the Internet report has tracked the penetration of high-speed broadband connections in each region, which indicates how many users subscribe to the fastest and most up-to-date network technologies. It effectively captures three very important factors in one simple index because it measures infrastructure, consumer choice, and utilization.

A nation or region cannot achieve a high score on this index unless it has fast networks, consumers are choosing high-speed service options, and networks are lightly shared. If speed is available but the price is too high, a nation’s score will be low. The US has always done quite well on the High Speed Broadband index, generally ranking 10th, plus or minus three places (figure 3). There has been no significant deviation in US rankings on this index since 2008.

Consequently, there is no empirical basis for claiming the US is falling dangerously behind other nations in broadband speed. It is also unsound to evaluate the innovation readiness of a nation strictly on the basis of its broadband speed or even broadband price. The important factors relate to the diffusion of relatively advanced broadband throughout the population, the willingness of consumers to choose high-speed plans, and the ability of citizens to use network-based services effectively.

High speed is a consequence of installing the latest generation of technology. Any relatively backward nation—such as Wyatt’s example, Latvia—that has installed broadband for the first time will automatically jump to a high rank in average speed because it will install the latest technology. But it will remain behind in effective utilization of network services because it lacks experience. So efforts to rank nations on the average speed of their most recent network buildouts are deceptive: nations win prizes not for being number one in speed, but for being fast enough to enable innovation.

Oddly, Wyatt’s analysis did not mention wireless networks, despite their prominent position as innovation enablers. The US was the first nation to install the latest 4G/LTE wireless networks at scale. The latest wireless networks are nearly as fast as common wired networks, so the fact that the US has the world’s best LTE coverage is important. The omission of wireless data and the misuse of the measurement data raise serious questions about the objectivity of the New York Times analysis.

Conclusion

Good technology policy depends on the competent application of engineering, economics, and law to technology markets and on realistic assessment of policy consequences. Policy can go astray when errors of analysis are made on any of these fronts, so policymakers must be well informed across the board and must not privilege any one form of analysis over the others. It is not necessary for every policymaker to be skilled in the arts of engineering, economics, law, and policy analysis, but it helps if they can recognize bad analysis when they see it.

Advocates of overly aggressive government involvement in network markets often clothe their arguments in the language of law, economics, engineering, and known outcomes, implying that their conclusions are nothing more than academic findings based on dispassionate, intellectually rigorous investigation. While poor analysis is not the monopoly of any school of thought, the instances I have examined all happen to come from groups seeking more aggressive government involvement. Government involvement in technology markets is not necessarily toxic; in some instances, such as investment in long-term research, it is clearly beneficial. But aggressive government involvement in markets that are functioning well and producing the desired results—such as world leadership in Internet services, mobile networks, and smartphone applications and continuing price and performance improvements in wired networks—is dubious.

Law, economics, engineering, and retrospective policy analysis are all necessary to inform good Internet policy; bad analysis can just as easily lead to toxic outcomes, slowing progress and harming consumers. It is difficult to believe that the authors of consistently and egregiously bad analysis can honestly believe their “little white lies” are helping citizens, but the prevalence of unclear thinking on one side of the policy debates suggests this could be the case.

Notes

1. Rodney S. Tucker, “The Rise and Fall of Australia’s $44 Billion Broadband Project,” IEEE Spectrum, November 26, 2013, www.spectrum.ieee.org/telecom/internet/the-rise-and-fall-of-australias-44-billion-broadband-project.
2. Ibid.
3. Hibah Hussain et al., The Cost of Connectivity 2013 (Washington, DC: New America Foundation, October 2013), www.newamerica.net/publications/policy/the_cost_of_connectivity_2013; and Mark Cooper, Comparing Apples to Apples: How Competitive Provider Services Outpace the Baby Bell Duopoly (Washington, DC: Consumer Federation of America, November 21, 2013), www.consumerfed.org/pdfs/comparing-apples-to-apples-11-2013.pdf.
4. Akamai, State of the Internet, 4th Quarter 2009 and 2nd Quarter 2013, www.akamai.com/stateoftheinternet/.
5. George Ford, “New America Foundation Misinterprets International Data (Round Three) . . .,” Phoenix Center @lawandeconomics Blog, November 1, 2013, www.phoenix-center.org/blog/archives/1647.
6. Richard Bennett, Luke Stewart, and Robert Atkinson, The Whole Picture: Where America’s Broadband Networks Really Stand (Washington, DC: Information Technology and Innovation Foundation, February 12, 2013), www.itif.org/publications/whole-picture-where-america-s-broadband-networks-really-stand.
7. Patrick Brogan, “U.S. Gaining in World Internet Usage,” US Telecom blog, September 26, 2013, www.ustelecom.org/blog/us-gaining-world-internet-usage.
8. Ford, “New America Foundation Misinterprets International Data (Round Three) . . .”
9. Cooper, Comparing Apples to Apples, 2.
10. Tim Wu, “Network Neutrality, Broadband Discrimination,” Journal of Telecommunications and High Technology Law 2 (2003): 141; and Lawrence Lessig and Tim Wu, “Ex Parte Submission in CS Docket No. 02-52,” August 22, 2003, www.timwu.org/wu_lessig_fcc.pdf.
11. Wu, “Network Neutrality, Broadband Discrimination,” 145.
12. Ibid., 146.
13. Ibid., 149.
14. Ibid., 149.
15. Van Jacobson and Michael J. Karels, “Congestion Avoidance and Control,” ACM SIGCOMM Computer Communication Review 18, no. 4 (August 1988): 314–29, http://dl.acm.org/citation.cfm?id=52356.
16. Wu, “Network Neutrality, Broadband Discrimination,” 150.
17. Ibid., 171.
18. Ibid., 172.
19. Edward Wyatt, “U.S. Struggles to Keep Pace in Delivering Broadband Service,” New York Times, December 30, 2013, www.nytimes.com/2013/12/30/technology/us-struggling-to-keep-pace-in-broadband-service.html.
20. Ibid.
21. Edward Wyatt (@wyattnyt), tweet, December 30, 2013, https://twitter.com/wyattnyt/status/417842401223659520.
22. World Economic Forum, “Delivering on Digital Infrastructure,” www.weforum.org/issues/delivering-digital-infrastructure.
23. European Commission, “Broadband in Europe: Consumers Are Not Getting the Internet Speeds They Are Paying For,” press release, June 26, 2013, http://europa.eu/rapid/press-release_IP-13-609_en.htm.
24. Akamai, State of the Internet.

About the Author

Richard Bennett
