We also like to think that the Internet is still widely distributed as Baran envisioned, when in fact it’s perhaps the most centralized communications network ever built. In the beginning, ARPANET did indeed hew closely to that distributed ideal. A 1977 map of the growing network shows at least four redundant transcontinental routes, run over phone lines leased from AT&T, linking up the major computing clusters in Boston, Washington, Silicon Valley, and Los Angeles. Metropolitan loops created redundancy within those regions as well. [19] If the direct link to a neighboring site went down, packets could still reach it by traveling around the loop in the other direction, an approach still common in network design today.
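The loop idea is simple enough to show in a few lines. Here is a minimal sketch in Python (the five-site ring and its labels are invented for illustration): a ring gives every pair of sites two disjoint paths, so when the direct link fails, a shortest-path search simply finds the route around the other way.

```python
from collections import deque

# A toy metropolitan loop: five sites, each linked to its two neighbors.
ring = {
    "A": {"B", "E"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"C", "E"},
    "E": {"D", "A"},
}

def shortest_path(links, src, dst):
    """Breadth-first search for the fewest-hop route, or None if cut off."""
    frontier = deque([[src]])
    visited = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - visited:
            visited.add(nxt)
            frontier.append(path + [nxt])
    return None  # no surviving route: the network is partitioned

print(shortest_path(ring, "A", "B"))  # ['A', 'B'], the direct link

# Sever the A-B link; traffic flows around the loop the other way.
ring["A"].discard("B")
ring["B"].discard("A")
print(shortest_path(ring, "A", "B"))  # ['A', 'E', 'D', 'C', 'B']
```

This is the same principle, in miniature, behind the protective ring topologies that carriers still deploy in metropolitan fiber networks.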
By 1987, the Pentagon was ready to pull the plug on what it had always considered an experiment. But the research community was hooked, so plans were made to hand over control to the National Science Foundation, which merged the civilian portion of the ARPANET with its own research network, NSFNET, launched a year earlier. In July 1988, NSFNET turned on a new national backbone network that dropped the redundant and distributed grid of ARPANET in favor of a more efficient and economical hub-and-spoke arrangement. [20] Much like the air-transportation network today, consortia of universities pooled their resources to deploy their own regional feeder networks (often with significant NSF funding), which linked up into the backbone at several hubs scattered strategically around the country.
Just seven years later, in April 1995, the National Science Foundation handed over management of the backbone to the private sector. The move would lead to even greater centralization by designating just four major interconnection points through which bits would flow across the country. Located outside San Francisco, Washington, Philadelphia, and Chicago, these hubs were the center not just of America’s Internet, but the world’s. At the time, an e-mail from Europe to Asia would almost certainly transit through Virginia and California. Since then, things have centralized even more. One of those hubs, in Ashburn, Virginia, is home to what is arguably the world’s largest concentration of data centers, some forty buildings boasting the collective footprint of twenty-two Walmart Supercenters. [21] Elsewhere, Internet infrastructure has coalesced around preexisting hubs of commerce. Today, you could knock out a handful of buildings in Manhattan where the world’s big network providers connect to each other — 60 Hudson Street, 111 Eighth Avenue, 25 Broadway — and cut off a good chunk of transatlantic Internet capacity. (Fiber isn’t the first technology to link 25 Broadway to Europe. The elegant 1921 edifice served as headquarters and main ticket office for the great ocean-crossing steamships of the Cunard Line until the 1960s.)
...
As we layer ever more fragile networks and single points of failure on top of the Internet’s still-resilient core, major service disruptions are likely to become common. And with an increasing array of critical economic, social, and government services running over these channels, the risks are compounded.
The greatest cause for concern is our growing dependence on untethered networks, which puts us at the mercy of a fragile last wireless hop between our devices and the tower. Cellular networks have none of the resilience of the Internet. They are the fainting ladies of the network world — when the heat is on, they’re the first to go down and make the biggest fuss as they do so.
Cellular networks fail in all kinds of ugly ways during crises: damage to towers (fifteen were destroyed around the World Trade Center on 9/11 alone), destruction of the “backhaul” fiber-optic line that links the tower into the grid (many more), and power loss (most towers have just four hours of battery backup). In 2012, flooding caused by Hurricane Sandy cut backhaul to over 2,000 cell sites in eight counties in and around New York City and its upstate suburbs (not including New Jersey and Connecticut), and power to nearly 1,500 others. [24] Hurricane Katrina downed over a thousand cell towers in Louisiana and Mississippi in August 2005, severely hindering relief efforts because the public phone network was the only common radio system among the many responding government agencies. In the areas of Japan north of Tokyo annihilated by the 2011 tsunami, the widespread destruction of mobile-phone towers rolled the clock back on history, forcing people to resort to radios, newspapers, and even human messengers to communicate. “When cellphones went down, there was paralysis and panic,” the head of emergency communications in the city of Miyako told the New York Times. [25]...
Disruptions in public cloud-computing infrastructure highlight the vulnerabilities of dependence on network apps. Amazon Web Services, the 800-pound gorilla of public clouds that powers thousands of popular websites, experienced a major disruption in April 2011, lasting three days. According to a detailed report on the incident posted to the company’s website, the outage appears to have been a normal accident, to use Perrow’s term. A botched configuration change in the data center’s internal network, which had been intended to upgrade its capacity, shunted the entire facility’s traffic onto a lower-capacity backup network. Under the severe stress, “a previously unencountered bug” reared its head, preventing operators from restoring the system without risk of data loss. [27] Later, in July 2012, a massive electrical storm cut power to the company’s Ashburn data center, shutting down two of the most popular Internet services — Netflix and Instagram. [28] “Amazon Cloud Hit By Real Cloud,” quipped a PC World headline. [29]
The cloud is far less reliable than most of us realize, and its fallibility may be starting to take a real economic toll. Google, which prides itself on high-quality data-center engineering, suffered a half-dozen outages in 2008 lasting up to 30 hours. [30] Amazon promises its cloud customers 99.5 percent annual uptime, while Google pledges 99.9 percent for its premium apps service. That sounds impressive until you realize that even after years of increasing outages, even in the most blackout-prone region (the Northeast), the much-maligned American electric power industry averages 99.96 percent uptime. [31] Yet even that tiny gap between reality and perfection carries a huge cost. According to Massoud Amin of the University of Minnesota, power outages and power quality disturbances cost the U.S. economy between $80 billion and $188 billion a year. [32] A back-of-the-envelope calculation published by the International Working Group on Cloud Computing Resiliency tagged the economic cost of cloud outages between 2007 and mid-2012 at just $70 million (not including the July 2012 Amazon outage). [33] But as more and more of the vital functions of smart cities migrate to a handful of big, vulnerable data centers, this number is sure to swell in coming years.
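Percentages understate the difference; converting the figures above into hours of downtime per year makes the gap concrete. A minimal sketch, using only the numbers cited in this paragraph:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

for label, uptime_pct in [
    ("Amazon cloud SLA", 99.5),
    ("Google premium apps SLA", 99.9),
    ("U.S. electric grid, Northeast", 99.96),
]:
    # Downtime is the fraction of the year not covered by the uptime pledge.
    downtime_hours = HOURS_PER_YEAR * (100 - uptime_pct) / 100
    print(f"{label}: {uptime_pct}% uptime allows {downtime_hours:.1f} hours down per year")
```

By this arithmetic, Amazon’s pledge permits roughly 44 hours of downtime a year and Google’s about 9, against the grid’s 3.5.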
Cloud-computing outages could turn smart cities into zombies. Biometric authentication, for instance, which senses our unique physical characteristics to identify individuals, will increasingly determine our rights and privileges as we move through the city — granting physical access to buildings and rooms, personalizing environments, and enabling digital services and content. But biometric authentication is a complex task that will demand access to remote data and computation. The keyless entry system at your office might send a scan of your retina to a remote data center to match against your personnel record before admitting you. Continuous authentication, a technique that uses always-on biometrics — your appearance, gestures, or typing style — will constantly verify your identity, potentially eliminating the need for passwords. [34] Such systems will rely heavily on cloud computing, and will break down when it does. It’s one thing for your e-mail to go down for a few hours, but it’s another thing when everyone in your neighborhood gets locked out of their homes....
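To see how an outage becomes a lockout, here is a minimal sketch of such a keyless entry controller; the matching service, its endpoint, and its API are hypothetical, invented for illustration. With no local copy of the personnel records, the only safe behavior when the cloud is unreachable is to fail closed:

```python
import urllib.error
import urllib.request

# Hypothetical matching service; a real vendor's endpoint and API would differ.
MATCH_SERVICE = "https://auth.example.com/retina/match"

def door_should_open(retina_scan: bytes) -> bool:
    """Ask the remote service to match a scan against personnel records.

    There is no local fallback: if the cloud is slow or unreachable,
    the controller fails closed and the door stays locked.
    """
    request = urllib.request.Request(MATCH_SERVICE, data=retina_scan, method="POST")
    try:
        with urllib.request.urlopen(request, timeout=2) as response:
            return response.status == 200  # 200 = scan matched a record
    except (urllib.error.URLError, TimeoutError):
        return False  # a cloud outage reads, to the door, as "deny"
```

Failing closed is the right security default for a single door, but when the data center behind thousands of doors goes dark, every one of them stays locked at once.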
Smart cities are almost guaranteed to be chock-full of bugs, from smart toilets and faucets that won’t operate to public screens sporting Microsoft’s ominous Blue Screen of Death. But even when their code is clean, the innards of smart cities will be so complex that so-called normal accidents will be inevitable. The only questions are when smart cities will fail, and how much damage they will cause when they crash. Layered atop the fragile power grid, already prone to overload during crises and open to sabotage, the communications networks that patch the smart city together are as brittle an infrastructure as we’ve ever had.