This book was written in 2008. Although that was an eternity ago in digital years, the question it raises is still pertinent: namely, how will our world be transformed as computing power increases and gets cheaper? To answer it, Carr first turns to the history of electricity and its shift from an energy source generated on demand to a widely available utility.
Computing is similar to electricity in that, once it is freely available, it can be used to accomplish a wide variety of tasks. Alternating between historical narrative and speculation, Nicholas Carr proves as adept at recreating the past as he is at envisioning the future.
Quotes and Anecdotes: The 19th Century Ice Trade, Digital Sharecropping, and The Great Unbundling.
Highlights
The commercial and social ramifications of the democratization of electricity would be hard to overstate. Electric light altered the rhythms of life, electric assembly lines redefined industry and work, and electric appliances brought the Industrial Revolution into the home. Cheap and plentiful electricity shaped the world we live in today. It’s a world that didn’t exist a mere hundred years ago, and yet the transformation that has played out over just a few generations has been so great, so complete, that it has become impossible for us to imagine what life was like before electricity began to flow through the sockets in our walls.
If a company wants to tap into the power of technology, it has to purchase the various components required to supply it, install those components at its own site, cobble them together into a working system, and hire a staff of specialists to keep the system running. In the early days of electrification, factories had to build their own generators if they wanted to use the power of electricity—just as today’s companies have set up their own information systems to use the power of computing.
Such fragmentation is wasteful. It imposes large capital investments and heavy fixed costs on firms, and it leads to redundant expenditures and high levels of overcapacity, both in the technology itself and in the labor force operating it. The situation is ideal for the suppliers of the components of the technology—they reap the benefits of overinvestment—but it’s not sustainable. Once it becomes possible to provide the technology centrally, large-scale utility suppliers arise to displace the private providers. It may take decades for companies to abandon their proprietary operations and all the investments they represent. But in the end the savings offered by utilities become too compelling to resist, even for the largest enterprises. The grid wins.
Shawn Fanning’s invention showed the world, for the first time, how the Internet could allow many computers to act as a single shared computer, with thousands or even millions of people having access to the combined contents of previously private databases.
Technology shapes economics, and economics shapes society.
The transformation in the supply of computing promises to have especially sweeping consequences. Software programs already control or mediate not only industry and commerce but entertainment, journalism, education, even politics and national defense. The shock waves produced by a shift in computing technology will thus be intense and far-reaching. We can already see the early effects all around us—in the shift of control over media from institutions to individuals, in people’s growing sense of affiliation with “virtual communities” rather than physical ones, in debates over the security of personal information and the value of privacy, in the export of the jobs of knowledge workers, even in the growing concentration of wealth in a small slice of the population. All these trends either spring from or are propelled by the rise of Internet-based computing.
Many of the characteristics that define American society came into being only in the aftermath of electrification. The rise of the middle class, the expansion of public education, the movement of the population to the suburbs, the shift from an industrial to a service economy—none of these would have happened without the cheap current generated by utilities. Today, we think of these developments as permanent features of our society. But that’s an illusion. They’re the by-products of a particular set of economic trade-offs that reflected, in large measure, the technologies of the time. We may soon come to discover that what we assume to be the enduring foundations of our society are in fact only temporary structures, as easily abandoned as Henry Burden’s wheel.
The waste inherent in client-server computing is onerous for individual companies. But the picture gets worse—much worse—when you look at entire industries. Most of the software and almost all of the hardware that companies use today are essentially the same as the hardware and software their competitors use. Computers, storage systems, networking gear, and most widely used applications have all become commodities from the standpoint of the businesses that buy them. They don’t distinguish one company from the next. The same goes for the employees who staff IT departments. Most perform routine maintenance chores—exactly the same tasks that their counterparts in other companies carry out. The replication of tens of thousands of independent data centers, all using similar hardware, running similar software, and employing similar kinds of workers, has imposed severe penalties on the economy. It has led to the overbuilding of IT assets in almost every sector of industry, dampening the productivity gains that can spring from computer automation.
Why has computing progressed in such a seemingly dysfunctional way? Why has the personalization of computers been accompanied by such complexity and waste? The reason is fairly simple. It comes down to two laws. The first and most famous was formulated in 1965 by the brilliant Intel engineer Gordon Moore. Moore’s Law says that the power of microprocessors doubles every year or two. The second was proposed in the 1990s by Moore’s equally distinguished colleague Andy Grove. Grove’s Law says that telecommunications bandwidth doubles only every century. Grove intended his “law” more as a criticism of what he considered a moribund telephone industry than as a statement of technological fact, but it nevertheless expresses a basic truth: throughout the history of computing, processing power has expanded far more rapidly than the capacity of communication networks. This discrepancy has meant that a company can only reap the benefits of advanced computers if it installs them in its own offices and hooks them into its own local network. As with electricity in the time of direct-current systems, there’s been no practical way to transport computing power efficiently over great distances.
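A rough way to see why this gap matters: under the doubling periods quoted above, processing power and network bandwidth diverge by orders of magnitude within a couple of decades. The sketch below is illustrative only; it simply compounds the two rates Carr cites (it is not a calculation from the book).

```python
# Illustrative sketch: compound growth under the doubling periods cited above
# (processing power doubling roughly every 2 years, bandwidth every 100 years).
def growth_factor(years: float, doubling_period: float) -> float:
    """Return how many times a quantity grows after `years`."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    cpu = growth_factor(years, doubling_period=2)    # Moore's Law, as quoted
    net = growth_factor(years, doubling_period=100)  # Grove's "Law", as quoted
    print(f"After {years} years: processing x{cpu:,.0f}, "
          f"bandwidth x{net:.2f}, gap x{cpu / net:,.0f}")
```

Even treating the doubling periods loosely, the computation-versus-communication gap grows roughly a thousandfold every twenty years, which is Carr's explanation for why companies had to keep their computers close by.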
As with many multisyllabic computer terms, “virtualization” is not quite as complicated as it sounds. It refers to the use of software to simulate hardware. As a simple example, think of the way the telephone answering machine has changed over the years. It began as a bulky, stand-alone box that recorded voices as analogue signals on spools of tape. But as computer chips advanced, the answering machine turned into a tiny digital box, often incorporated into a phone. Messages weren’t inscribed on tape but rather stored as strings of binary bits in the device’s memory. Once the machine had become fully digitized, though, it no longer had to be a machine at all. All its functions could be replicated through software code. And that’s exactly what happened. The box disappeared. The physical machine turned into a virtual machine—into pure software running out somewhere on a phone company’s network. Once you had to buy an answering machine. Now you can subscribe to an answering service. That’s the essence of virtualization.
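To make Carr's point concrete, here is a minimal sketch of an answering "machine" that exists only as software; the class name and structure are my own illustration, not anything from the book. Run it on your own computer and it is a virtual appliance; run it on a provider's server and it becomes the answering service Carr describes.

```python
from datetime import datetime

class VirtualAnsweringMachine:
    """Toy illustration of virtualization: the 'machine' is pure software.
    Messages are just data in memory rather than signals on tape."""

    def __init__(self, greeting: str = "Please leave a message."):
        self.greeting = greeting
        self.messages = []  # each entry is a (timestamp, caller, text) tuple

    def answer_call(self, caller: str, text: str) -> str:
        """Record an incoming message and return the greeting played to the caller."""
        self.messages.append((datetime.now(), caller, text))
        return self.greeting

    def playback(self):
        """Yield stored messages, oldest first."""
        yield from self.messages

# Usage: no dedicated hardware anywhere, only code and data.
machine = VirtualAnsweringMachine()
machine.answer_call("555-0100", "Call me back about the meeting.")
for when, caller, text in machine.playback():
    print(when.isoformat(timespec="seconds"), caller, text)
```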
In short, the central supply of cheap electricity altered the economics of everyday life. What had been scarce—the energy needed to power industrial machines, run household appliances, light lights—became abundant. It was as if a great dam had given way, releasing, at long last, the full force of the Industrial Revolution.
Here was the first, but by no means the last, irony of electrification: even as factory jobs came to require less skill, they began to pay higher wages. And that helped set in motion one of the most important social developments of the century: the creation of a vast, prosperous American middle class.
Another development in the labor market also played an important role in the rise of the middle class. As companies expanded, adopted more complicated processes, and sold their goods to larger markets, they had to recruit more managers and supervisors to oversee and coordinate their work. And they had to bring in many other kinds of white-collar workers to keep their books, sell their goods, create marketing and advertising campaigns, design new products, recruit and pay employees, negotiate contracts, type and file documents, and, of course, operate punch-card tabulators and related business machines. As industries such as chemicals manufacturing and steel-making became more technologically advanced, moreover, companies had to hire cadres of scientists and engineers. While the expansion of the white-collar workforce, like the mechanization of factories, began before electrification, cheap power accelerated the trend. And all the new office jobs paid well, at least by historical standards.
The shift in skilled employment away from tradesmen and toward what would come to be known as “knowledge workers” had a knock-on effect that also proved pivotal in reshaping American society: it increased the workforce’s educational requirements. Learning the three Rs in grammar school was no longer enough. Children needed further and more specialized education to prepare them for the new white-collar jobs. That led to what Harvard economist Claudia Goldin has termed “the great transformation of American education,” in which public education was extended from elementary schools to high schools. Secondary education had been rare up through the early years of the century; it was reserved for a tiny elite as a preparatory step before entering university. In 1910, high-school enrollment in even the wealthiest and most industrially advanced regions of the country rarely exceeded 30 percent of 14- to 17-year-olds, and it was often considerably lower than that. But just twenty-five years later, average enrollment rates had jumped to between 70 and 90 percent in most parts of the country. Going to high school, which a generation earlier wouldn’t have entered the minds of most kids, had become a routine stop on the way to a decent job.
Before the arrival of electric appliances, homemaking had been viewed as work—as a series of largely unpleasant but inescapable tasks. If it wasn’t always drudgery, it was something that had to be done, not something that one would have chosen to do. After electrification, homemaking took on a very different character. It came to be seen not as a chore but as a source of identity and, in itself, a means of personal fulfillment. Women saw their status and their worth as being inextricably linked to their success as a homemaker, which in turn hinged on their ability to master domestic machinery.
Utility-supplied electricity was by no means the only factor behind the great changes that swept American business and culture in the first half of the twentieth century. But whether it exerted its influence directly or through a complicated chain of economic and behavioral reactions, the electric grid was the essential, formative technology of the time—the prime mover that set the great transformations in motion. It’s impossible to conceive of modern society taking its current shape—what we now sense to be its natural shape—without the cheap power generated in seemingly unlimited quantities by giant utilities and delivered through a universal network into nearly every factory, office, shop, home, and school in the land. Our society was forged—we were forged—in Samuel Insull’s dynamo.
The web had turned out to be less the new home of Mind than the new home of Business.
As user-generated content continues to be commercialized, it seems likely that the largest threat posed by social production won’t be to big corporations but to individual professionals—to the journalists, editors, photographers, researchers, analysts, librarians, and other information workers who can be replaced by, as Horowitz put it, “people not on the payroll.” Sion Touhig, a distinguished British photojournalist, points to the “glut of images freely or cheaply available on the Web” in arguing that “the Internet ‘economy’ has devastated my sector.” Why pay a professional to do something that an amateur is happy to do for free?
There have always been volunteers, of course, but unpaid workers are now able to replace paid workers on a scale far beyond what’s been possible before. Businesses have even come up with a buzzword for the phenomenon: “crowdsourcing.” By putting the means of production into the hands of the masses but withholding from those masses any ownership over the products of their communal work, the World Wide Computer provides an incredibly efficient mechanism for harvesting the economic value of the labor provided by the very many and concentrating it in the hands of the very few. Chad Hurley and Steve Chen had good reason to thank the “YouTube community” so profusely when announcing the Google buyout. It was the members of that community who had, by donating their time and creativity to the site, made the two founders extremely rich young men.
In the YouTube economy, everyone is free to play, but only a few reap the rewards.
The shift from scarcity to abundance in media means that, when it comes to deciding what to read, watch, and listen to, we have far more choices than our parents or grandparents did. We’re able to indulge our personal tastes as never before, to design and wrap ourselves in our own private cultures.
In the real world, with its mortgages and schools and jobs, the mechanical forces of segregation move slowly. There are brakes on the speed with which we pull up stakes and move to a new house. Internet communities have no such constraints. Making a community-defining decision is as simple as clicking a link. Every time we subscribe to a blog, add a friend to our social network, categorize an email message as spam, or even choose a site from a list of search results, we are making a decision that defines, in some small way, whom we associate with and what information we pay attention to.
In theory, “preferences for broader knowledge, or even randomized information, can also be indulged.” In reality, though, our slight bias in favor of similarity over dissimilarity is difficult, if not impossible, to eradicate. It’s part of human nature.
The study revealed a fact about human nature and group dynamics that psychologists have long recognized: the more that people converse or otherwise share information with other people who hold similar views, the more extreme their views become.
The internet turns everything, from news-gathering to community-building, into a series of tiny transactions—expressed mainly through clicks on links—that are simple in isolation yet extraordinarily complicated in the aggregate. Each of us may make hundreds or even thousands of clicks a day, some deliberately, some impulsively, and with each one we are constructing our identity, shaping our influences, and creating our communities. As we spend more time and do more things online, our combined clicks will shape our economy, our culture, and our society.
We’re still a long way from knowing where our clicks will lead us. But it’s clear that two of the hopes most dear to the Internet optimists—that the Web will create a more bountiful culture and that it will promote greater harmony and understanding—should be treated with skepticism. Cultural impoverishment and social fragmentation seem equally likely outcomes.
Technology is amoral, and inventions are routinely deployed in ways their creators neither intend nor sanction. In the early years of electrification, electric-shock transmitters developed by the meatpacking industry to kill livestock were appropriated by police forces and spy agencies as tools for torturing people during interrogations. To hold inventors liable for the misuse of their inventions is to indict progress itself.
We treat the Internet not just as a shopping mall and a library but as a personal diary and even a confessional. Through the sites we visit and the searches we make, we disclose details not only about our jobs, hobbies, families, politics, and health but also about our secrets, fantasies, obsessions, peccadilloes, and even, in the most extreme cases, our crimes. But our sense of anonymity is largely an illusion. Detailed information about everything we do online is routinely gathered, stored in corporate or governmental databases, and connected to our real identities, either explicitly through our user names, our credit card numbers, and the IP addresses automatically assigned to our computers or implicitly through our searching and surfing histories.
Computer systems in general and the Internet in particular put enormous power into the hands of individuals, but they put even greater power into the hands of companies, governments, and other institutions whose business it is to control individuals.
“The Yahoo story,” write Jack Goldsmith and Tim Wu, “encapsulates the Internet’s transformation from a technology that resists territorial law to one that facilitates its enforcement.”
While the Net offers people a new medium for discovering information and voicing opinions, it also provides bureaucrats with a powerful new tool for monitoring speech, identifying dissidents, and disseminating propaganda.
Businesses have also found that the Internet, far from weakening their control over employees, actually strengthens their hand. Corporate influence over the lives and thoughts of workers used to be bounded by both space and time. Outside the walls of a company’s office and outside the temporal confines of the workday, people were largely free from the control of their bosses. But one of the consequences of the Net’s boundary-breaking is that the workplace and the workday have expanded to fill all space and all time. Today, corporate software and data can be accessed from anywhere over the Internet, and email and instant-messaging traffic continues around the clock. In many companies, the de facto assumption is that employees are always at work, whether they’re in the office, at home, or even on vacation.
Despite the resistance of the Web’s early pioneers and pundits, consumerism long ago replaced libertarianism as the prevailing ideology of the online world.
Google originally resisted the linking of advertisements to search results—its founders argued that “advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers”—but now it makes billions of dollars through the practice.
Advertising and promotion have always been frustratingly imprecise. As the department store magnate John Wanamaker famously said more than a hundred years ago, “Half the money I spend on advertising is wasted. The trouble is, I don’t know which half.”
History tells us that the most powerful tools for managing the processing and flow of information will be placed in the hands not of ordinary citizens but of businesses and governments. It is their interest—the interest of control—that will ultimately guide the progress and the use of the World Wide Computer.
The Internet, and all the devices connected to it, is not simply a passive machine that responds to our commands. It’s a thinking machine, if as yet a rudimentary one, that actively collects and analyzes our thoughts and desires as we express them through the choices we make while online—what we do, where we go, whom we talk to, what we upload, what we download, which links we click on, which links we ignore. By assembling and storing billions upon billions of tiny bits of intelligence, the Web forms what the writer John Battelle calls “a database of human intentions.”
It will be years before there are any definitive studies of the effect of extensive Internet use on our memories and thought processes.
The medium is not only the message. The medium is the mind. It shapes what we see and how we see it. The printed page, the dominant information medium of the past 500 years, molded our thinking through, to quote Neil Postman, “its emphasis on logic, sequence, history, exposition, objectivity, detachment, and discipline.” The emphasis of the Internet, our new universal medium, is altogether different. It stresses immediacy, simultaneity, contingency, subjectivity, disposability, and, above all, speed. The Net provides no incentive to stop and think deeply about anything, to construct in our memory that “dense repository” of knowledge that Foreman cherishes. It’s easier, as Kelly says, “to Google something a second or third time rather than remember it ourselves.”
Cleaner, safer, and even more efficient than the flame it replaced, the lightbulb was welcomed into homes and offices around the world. But along with its many practical benefits, electric light also brought subtle and unexpected changes to the way people lived. The fireplace, the candle, and the oil lamp had always been the focal points of households. Fire was, as Schivelbusch puts it, “the soul of the house.” Families would gather in a central room in the evening, drawn by the flickering flame, to chat about the day’s events or otherwise pass the time together. Electric light, together with central heat, dissolved that long tradition. Family members began to spend more time in different rooms in the evening, studying or reading or working alone. Each person gained more privacy, and a greater sense of autonomy, but the cohesion of the family weakened.
All technology change is generational change. The full power and consequence of a new technology are unleashed only when those who have grown up with it become adults and begin to push their outdated parents to the margins. As the older generations die, they take with them their knowledge of what was lost when the new technology arrived, and only the sense of what was gained remains. It’s in this way that progress covers its tracks, perpetually refreshing the illusion that where we are is where we were meant to be.
In many cases, apps refine and simplify people’s use of the Internet and allow online services to be more closely tailored to personal preferences. But they also mark a retreat from the Net’s original, chaotic openness. More and more, our online experiences are shaped to fit the commercial interests of the big companies that control the cloud. The price for greater convenience and slicker professionalism is an erosion of personal choice and autonomy. This kind of tradeoff is nothing new. As Columbia law professor Tim Wu explained in his 2010 book The Master Switch, every mass medium has gone down a similar path as it has matured. After an initial period of “revolutionary novelty and youthful utopianism,” when experimentation is rampant and “individual expression” untrammeled, the medium comes to be dominated by powerful corporations, which control “the flow and nature of content” with a view to maximizing their profits. Because the Internet, more than any past medium, has become “the fabric of their lives,” Wu wrote, any companies that come to wield substantial control over it would also gain unprecedented power over us.