Reviewed by Hal Abelson
The spectacular achievement of the Internet is a success that has many parents. But when it comes to engineering design, a top honor must go to the decision to make the Net "stupid": Let the network perform its limited function of transmitting bits, and leave "smarter" functions, such as encryption, content filtering and quality of service, to be supplied by the computers attached to the network rather than by the network core itself. In other words, let the network do its basic job while staying out of the way of everything else. In 1984, three designers of the Internet communications protocols -- Jerome H. Saltzer, David P. Reed and David D. Clark -- published a paper in ACM Transactions on Computer Systems in which they dubbed this approach the end-to-end principle. It spawned a communications system of enormous flexibility, one that, over a quarter-century of mind-boggling innovation and growth, has adapted to accommodate numerous new devices and applications.
The end-to-end principle demonstrates the Internet designers' good sense, and their humility, in appreciating that they could not possibly predict in the early 1970s all the things people might want to use the Net for. They chose therefore to restrict those potential uses as little as possible. End-to-end as an engine of innovation has become a watchword in communications policy as well as technology (see, for example, Lawrence Lessig's The Future of Ideas: The Fate of the Commons in a Connected World). And end-to-end arguments have permeated much of the past year's wrangling over "network neutrality," as advocates of network neutrality have appeared before Congress and the Federal Communications Commission.
Yet end-to-end has a dark side. After all, if you can use the Net for anything, then you can use it for anything -- including spam, denial-of-service attacks and computer break-ins carried out by spoofing IP addresses or poisoning domain-name server caches, all of which are enabled by the simplicity of the Internet's core architecture. Something the designers didn't anticipate back in the infancy of the Net was the subsequent enormous growth in the number of users, bringing with it bad actors ready to exploit the system's very flexibility in order to undermine it. By 2001, Clark, one of the original end-to-end architects, was worrying with Marjory Blumenthal, then-director of the National Academies' Computer Science and Telecommunications Board, that it might be necessary to backtrack from the end-to-end principle, or at least reinterpret it, in light of the harm that can be done by untrustworthy Internet users. That Blumenthal and Clark's article, which appeared in the first issue of ACM Transactions on Internet Technology, immediately preceded the explosion of new Web 2.0 applications shows that clamping down on end-to-end then would have been premature. But the worries, and the tensions about ease of innovation in a world of bad actors, are greater than ever.
The story of the end-to-end Internet and its discontents has been told before, but never with such insight and never from such a comprehensive technical, legal, policy and social perspective as in Jonathan Zittrain's The Future of the Internet -- And How to Stop It. Zittrain, who recently moved from the Oxford Internet Institute to Harvard Law School, is renowned in Internet policy circles as the most tech-savvy of today's young cyber-legal minds. This book is certain to cement that reputation. I have been fortunate to co-teach Internet policy courses with Zittrain at Harvard and MIT, and it is delightful to see his insights packaged up with such lucidity and wit here.
The Future of the Internet is about much more than Internet architecture. The same progression from open innovation to open anxiety has played out with the personal computer. Say what you wish about Microsoft's disdain for open standards; the fact is that anyone can create a Windows PC application, run it, distribute it and sell it with no need to ask permission from anyone at the company's headquarters. That's not true of Apple's iPhone, for which (as of this writing) you can't distribute an application unless it's been approved by Apple, which also reserves the right, and maintains the architectural control, to assassinate iPhone applications even after they've been distributed and are in use. As with the Internet, a world in which anyone can program any PC or mobile application is also a world of viruses and other bugaboos. And whether users will accept the same risk to their phones as they do to their PCs will determine whether the emerging ecology of smartphone applications will be more like the innovation-rich Internet or more like, well, the phone.
In Zittrain's telling, the end-to-end Internet and the open PC platform are examples of generative systems. He defines generativity as "a system's capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences." The first part of the book is an analysis of generative tools, their major characteristics and the ways that generative systems enable their most salient input (participation) and their most salient output (innovation).
Generative systems are subject to the generative pattern: An idea originates and contributions are welcomed from anyone. Success brings more and more usage, including users who don't share the original goals of experimentation and still others who use the system for undesirable ends. Finally, there is "movement toward enclosure to prevent the problems that arise from the system's very popularity."
So it is with the Internet and the personal computer. We're seeing a shift away from generative platforms to what Zittrain calls "tethered appliances": TiVos rather than general-purpose PCs with video recording software, "dumb terminals" attached to network services rather than full-blown computers. The movement is prompted by consumers' desire for simple single-function devices -- and by their frustration when they misuse their machines and when virus writers and other malefactors abuse them.
Zittrain then opens the law books to argue that prospects may be bleaker still for the generative Net. Closed, tethered platforms are easily subject to regulation and control, whether through surveillance or by means of the outright removal of product features that enable activities that manufacturers or governments find undesirable. The movement toward enclosure, moreover, is self-reinforcing: If a manufacturer gives itself the ability to lock a system down, then a subsequent legal procedure may require it to impose that lockdown. Zittrain cites the 2004 patent infringement case of TiVo v. EchoStar, which is still being appealed. After a Texas jury found in favor of TiVo, the court ordered EchoStar, today known as DISH Network Corporation, to use its over-the-air software upgrade capability to downgrade its video receivers by disabling the recording capability in systems that customers had already purchased and were using. It's not hard to foresee how tethered appliances and hosted services can become systems where consumers can't count on the continued performance of devices they "own." That's the very opposite of generativity and a death blow to participatory innovation.
But this book is no Cassandra's tale, and Zittrain's aim is not mere hand-wringing. Instead, he's on a profoundly optimistic quest to rescue the generative Internet from the generative pattern's fate -- and he finds the way forward in generativity itself. His inspiration here is Wikipedia, which began as the "crazy" idea of creating a collaborative encyclopedia from Web pages that anyone in the world can edit and has matured into one of the largest Web reference sites, with more than 10 million articles and 75,000 active contributors. Anyone can create or modify a Wikipedia page at any time, or vandalize it. But, equally, anyone can undo the damage, and there's a team of volunteer Wikipedian editors ready to detect and correct introduced errors at a moment's notice. In Zittrain's recent parlance (adopted since the book's publication), this is a "civic technology" that has engendered a "civic defense."
The final part of The Future of the Internet presents an array of legal and technical approaches to stimulating civic defenses for the generative Internet. These range from a liability regime that exposes manufacturers to greater risk when they exert greater control over the use of their products, to Herdict, a decentralized PC monitoring system that lets users share information about PC performance to identify malware, to a suggestion for PCs to host multiple virtual machines that can run with different degrees of openness. The key to these approaches is that generativity itself can empower communities with a shared interest in preserving the Internet's infrastructure -- communities that can be mobilized to protect it more effectively than any top-down regulatory scheme.
Will the Internet a decade from now retain its generative soul and continue on its breathtaking path of participatory innovation? Or will we find ourselves in a world of tethered appliances and locked-down phones? At the end of the day Zittrain's message is one of hope rather than reassurance, but it's the best hope we have. This book is a must-read for any student of technology and policy, and its prescriptions are a must-do for the future of innovation in the digital age.
Hal Abelson is Professor of Computer Science and Engineering at MIT and a founding board member of the Free Software Foundation, Creative Commons, and Public Knowledge. He is author, with Ken Ledeen and Harry Lewis, of Blown to Bits: Your Life, Liberty, and Happiness after the Digital Explosion (Addison-Wesley, 2008); a Web site and policy blog associated with the book can be found at bitsbook.com. Abelson is also author, with Gerry Sussman and Julie Sussman, of the computer science textbook Structure and Interpretation of Computer Programs (MIT Press, 1985, second edition 1996).