“The Glass Cage” by Nicholas Carr

Because designers often assume that human beings are “unreliable and inefficient,” at least when compared to a computer, they strive to give them as small a role as possible in the operation of systems. People end up functioning as mere monitors, passive watchers of screens. That’s a job that humans, with our notoriously wandering minds, are particularly bad at. Research on vigilance, dating back to studies of British radar operators watching for German submarines during World War II, shows that even highly motivated people can’t keep their attention focused on a display of relatively stable information for more than about half an hour. They get bored; they daydream; their concentration drifts. “This means,” Bainbridge wrote, “that it is humanly impossible to carry out the basic function of monitoring for unlikely abnormalities.”

Automation asks so little of us. By its very definition, it runs itself, and that’s the problem. In some industries more than others, technology has grown so advanced that it reduces workers to passive monitors rather than engaged participants. That’s a role humans are uniquely bad at filling. We thrive on active participation; we are defined by the work we do.

Nicholas Carr isn’t a reflexive Luddite. Rather, he’s a deep thinker on the possibilities of technology and where it’s taking us. After all, as he says, technology is “what makes us human. Technology is in our nature. Through our tools we give our dreams form. We bring them into the world. The practicality of technology may distinguish it from art, but both spring from a similar, distinctly human yearning.” But he does have serious misgivings about letting the machines work for us (telling us how to navigate, remembering facts, and so on) while we sit idle. And those misgivings make for an engaging, if harrowing, read.

Quotes and Anecdotes: The Downside of GPS

Highlights

This is a book about automation, about the use of computers and software to do things we used to do ourselves. It’s not about the technology or the economics of automation, nor is it about the future of robots and cyborgs and gadgetry, though all those things enter into the story. It’s about automation’s human consequences.

We’re inclined to desire things we don’t like and to like things we don’t desire. “When the things we want to happen do not improve our happiness, and when the things we want not to happen do,” the cognitive psychologists Daniel Gilbert and Timothy Wilson have observed, “it seems fair to say we have wanted badly.” And as a slew of gloomy studies shows, we’re forever wanting badly.

Plenty of jobs are dull and even demeaning, and plenty of hobbies and pastimes are stimulating and fulfilling. But a job imposes a structure on our time that we lose when we’re left to our own devices. At work, we’re pushed to engage in the kinds of activities that human beings find most satisfying. We’re happiest when we’re absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work, to use Csikszentmihalyi’s term, that we tune out distractions and transcend the anxieties and worries that plague our everyday lives. Our usually wayward attention becomes fixed on what we’re doing.

In the workplace, automation’s focus on enhancing speed and efficiency—a focus determined by the profit motive rather than by any particular concern for people’s well-being—often has the effect of removing complexity from jobs, diminishing the challenge they present and hence the engagement they promote.

That may be the most important lesson to be gleaned from Wiener’s work—and, for that matter, from the long, tumultuous history of labor-saving technology. Technology changes, and it changes more quickly than human beings change. Where computers sprint forward at the pace of Moore’s law, our own innate abilities creep ahead with the tortoise-like tread of Darwin’s law. In the United States and other Western countries, fatal airliner crashes have become exceedingly rare. Of the more than seven billion people who boarded U.S. commercial flights in the ten years from 2002 through 2011, only 153 ended up dying in a wreck, a rate of two deaths for every hundred million passengers. In the ten years from 1962 through 1971, by contrast, 1.3 billion people took flights, and 1,696 of them died, for a rate of 133 deaths per hundred million.
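The quoted rates follow directly from the raw counts; here's a quick back-of-the-envelope check (mine, not from the book):

```python
# Sanity-check the fatality rates quoted above. The counts are as given in
# the book; small discrepancies come from rounding of the passenger totals.
def deaths_per_hundred_million(deaths: int, passengers: int) -> float:
    return deaths / passengers * 100_000_000

print(deaths_per_hundred_million(153, 7_000_000_000))    # ~2.2 (2002-2011)
print(deaths_per_hundred_million(1_696, 1_300_000_000))  # ~130 (1962-1971;
# the book's figure of 133 implies a passenger total slightly under 1.3 billion)
```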

By isolating us from negative feedback, automation makes it harder for us to stay alert and engaged. We tune out even more.

Automation tends to turn us from actors into observers. Instead of manipulating the yoke, we watch the screen. That shift may make our lives easier, but it can also inhibit our ability to learn and to develop expertise. Whether automation enhances or degrades our performance in a given task, over the long run it may diminish our existing skills or prevent us from acquiring new ones.

When we work hard at something, when we make it the focus of attention and effort, our mind rewards us with greater understanding. We remember more and we learn more. In time, we gain know-how, a particular talent for acting fluidly, expertly, and purposefully in the world. That’s hardly a surprise. Most of us know that the only way to get good at something is by actually doing it. It’s easier to gather information quickly from a computer screen—or from a book, for that matter. But true knowledge, particularly the kind that lodges deep in memory and manifests itself in skill, is harder to come by. It requires a vigorous, prolonged struggle with a demanding task.

A series of experiments reported in Science in 2011 indicates that the ready availability of information online weakens our memory for facts.

Without lots of practice, lots of repetition and rehearsal of a skill in different circumstances, you and your brain will never get really good at anything, at least not anything complicated. And without continuing practice, any talent you do achieve will get rusty.

The ability to act with aplomb in the world turns all of us into artists. “The effortless absorption experienced by the practiced artist at work on a difficult project always is premised upon earlier mastery of a complex body of skills.” When automation distances us from our work, when it gets between us and the world, it erases the artistry from our lives.

Simulations are also simplifications; they replicate the real world only imperfectly, and their outputs often reflect the biases of their creators.

Falling victim to the substitution myth, the RAND researchers did not sufficiently account for the possibility that electronic records would have ill effects along with beneficial ones—a problem that plagues many forecasts about the consequences of automation. The overly optimistic analysis led to overly optimistic policy.

“The lesson,” he wrote, “should be increasingly clear—it is not necessarily true that highly complex equipment requires skilled operators. The ‘skill’ can be built into the machine.”

But the replication of the outputs of thinking is not thinking. As Turing himself stressed, algorithms will never replace intuition entirely. There will always be a place for “spontaneous judgments which are not the result of conscious trains of reasoning.” What really makes us smart is not our ability to pull facts from documents or decipher statistical patterns in arrays of data. It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge. It’s this supple quality of mind, spanning conscious and unconscious cognition, reason and inspiration, that allows human beings to think conceptually, critically, metaphorically, speculatively, wittily—to take leaps of logic and imagination.

A particular risk with correlation-calculation algorithms stems from their reliance on data about the past to anticipate the future. In most cases, the future behaves as expected; it follows precedent. But on those peculiar occasions when conditions veer from established patterns, the algorithms can make wildly inaccurate predictions— a fact that has already spelled disaster for some highly computerized hedge funds and brokerage firms. For all their gifts, computers still display a frightening lack of common sense.

In his 1947 essay “Rationalism in Politics,” the British philosopher Michael Oakeshott provided a vivid description of the modern rationalist: “His mind has no atmosphere, no changes of season and temperature; his intellectual processes, so far as possible, are insulated from all external influence and go on in the void.” The rationalist has no concern for culture or history; he neither cultivates nor displays a personal perspective. His thinking is notable only for “the rapidity with which he reduces the tangle and variety of experience” into “a formula.” Oakeshott’s words also provide us with a perfect description of computer intelligence: eminently practical and productive and entirely lacking in curiosity, imagination, and worldliness.

In the year 2000, the U.S. government lifted many of the restrictions on the civilian use of the global positioning system.

Julia Frankenstein, a German cognitive psychologist who studies the mind’s navigational sense, believes it’s likely that “the more we rely on technology to find our way, the less we build up our cognitive maps.” Because computer navigation systems provide only “bare-bones route information, without the spatial context of the whole area,” she explains, our brains don’t receive the raw material required to form rich memories of places. “Developing a cognitive map from this reduced information is a bit like trying to get an entire musical piece from a few notes.”

In fact, O’Keefe and the Mosers, as well as other scientists, have begun to theorize that the “mental travel” of memory is governed by the same brain systems that enable us to get around in the world.

Technology improves, of course, and bugs get fixed. Flawlessness, though, remains an ideal that can never be achieved. Even if a perfect automated system could be designed and built, it would still operate in an imperfect world. Autonomous cars don’t drive the streets of a utopia. Robots don’t ply their trades in Elysian factories. Geese flock. Lightning strikes. The conviction that we can build an entirely self-sufficient, entirely reliable automated system is itself a manifestation of automation bias.

Concerns about the effects of computers and other machines on people’s minds and bodies have routinely been trumped by the desire to achieve maximum efficiency, speed, and precision—or simply to turn as big a profit as possible.

By defining the human factor as a peripheral concern, the technologist also removes the main impediment to the fulfillment of his desires; the unbridled pursuit of technological progress becomes self-justifying. To judge technology primarily on its technological merits is to give the gadgeteer carte blanche.

Video games tend to be loathed by people who have never played them. That’s understandable, given the gore involved, but it’s a shame. In addition to their considerable ingenuity and occasional beauty, the best games provide a model for the design of software. They show how applications can encourage the development of skills rather than their atrophy. To master a video game, a player has to struggle through challenges of increasing difficulty, always pushing the limits of his talent. Every mission has a goal, there are rewards for doing well, and the feedback (an eruption of blood, perhaps) is immediate and often visceral. Games promote a state of flow, inspiring players to repeat tricky maneuvers until they become second nature. The skill a gamer learns may be trivial—how to manipulate a plastic controller to drive an imaginary wagon over an imaginary bridge, say—but he’ll learn it thoroughly, and he’ll be able to exercise it again in the next mission or the next game. He’ll become an expert, and he’ll have a blast along the way.

Even the high-end programs used by musicians, record producers, filmmakers, and photographers place an ever stronger emphasis on ease of use. Complex audio and visual effects, which once demanded expert know-how, can be achieved by pushing a button or dragging a slider. The underlying concepts need not be understood, as they’ve been incorporated into software routines. This has the very real benefit of making the software useful to a broader group of people—those who want to get the effects without the effort. But the cost of accommodating the dilettante is a demeaning of expertise.

It’s true we don’t need to be experts at everything, but as software writers take to scripting processes of intellectual inquiry and social attachment, frictionlessness becomes a problematic ideal. It can sap us not only of know-how but of the sense that know-how is something important and worth cultivating. Think of the algorithms for reviewing and correcting spelling that are built into virtually every writing and messaging application these days. Spell checkers once served as tutors. They’d highlight possible errors, calling your attention to them and, in the process, giving you a little spelling lesson. You learned as you used them. Now, the tools incorporate autocorrect functions. They instantly and surreptitiously clean up your mistakes, without alerting you to them. There’s no feedback, no “friction.” You see nothing and learn nothing.
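A toy sketch (mine, not Carr’s) of the two designs he contrasts: one surfaces the mistake so the writer learns from it, the other erases it silently.

```python
# Toy contrast between a tutor-style spell checker and silent autocorrect.
# The tiny dictionary and correction table are illustrative stand-ins.
DICTIONARY = {"the", "quick", "brown", "fox"}
CORRECTIONS = {"teh": "the", "quik": "quick"}

def tutor_style(text: str) -> str:
    """Flag possible errors; the fix (and the lesson) is left to the writer."""
    for word in text.split():
        if word not in DICTIONARY:
            print(f"Possible misspelling: {word!r}")
    return text  # returned unchanged: feedback, with friction

def autocorrect_style(text: str) -> str:
    """Silently substitute corrections; the writer never sees the mistake."""
    return " ".join(CORRECTIONS.get(w, w) for w in text.split())

tutor_style("teh quik brown fox")               # prints the two misspellings
print(autocorrect_style("teh quik brown fox"))  # "the quick brown fox", no feedback
```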

Like meddlesome parents who never let their kids do anything on their own, Google, Facebook, and other makers of personal software end up demeaning and diminishing qualities of character that, at least in the past, have been seen as essential to a full and vigorous life: ingenuity, curiosity, independence, perseverance, daring. It may be that in the future we’ll only experience such virtues vicariously, through the exploits of action figures like John Marston in the fantasy worlds we enter through screens.

Isaac Asimov’s first law of robot ethics—“a robot may not injure a human being, or, through inaction, allow a human being to come to harm”—sounds reasonable and reassuring, but it assumes a world far simpler than our own.

With a smartphone in hand, we become a little ghostly, wavering between worlds. People have always been distractible, of course. Minds wander. Attention drifts. But we’ve never carried on our person a tool that so insistently captivates our senses and divides our attention.

Social networks push us to present ourselves in ways that conform to the interests and prejudices of the companies that run them. Facebook, through its Timeline and other documentary features, encourages its members to think of their public image as indistinguishable from their identity. It wants to lock them into a single, uniform “self” that persists throughout their lives, unfolding in a coherent narrative beginning in childhood and ending, one presumes, with death. This fits with its founder’s narrow conception of the self and its possibilities. “You have one identity,” Mark Zuckerberg has said. “The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” He even argues that “having two identities for yourself is an example of a lack of integrity.” That view, not surprisingly, dovetails with Facebook’s desire to package its members as neat and coherent sets of data for advertisers. It has the added benefit, for the company, of making concern about personal privacy seem less valid. If having more than one identity indicates a lack of integrity, then a yearning to keep certain thoughts or activities out of public view suggests a weakness of character. But the conception of selfhood that Facebook imposes through its software can be stifling. The self is rarely fixed. It has a protean quality. It emerges through personal exploration, and it shifts with circumstances. That’s especially true in youth, when a person’s self-conception is fluid, subject to testing, experimentation, and revision. To be locked into an identity, particularly early in one’s life, may foreclose opportunities for personal growth and fulfillment.

Technology is as crucial to the work of knowing as it is to the work of production. The human body, in its native, unadorned state, is a feeble thing. It’s constrained in its strength, its dexterity, its sensory range, its calculative prowess, its memory. It quickly reaches the limits of what it can do. But the body encompasses a mind that can imagine, desire, and plan for achievements the body alone can’t fulfill. This tension between what the body can accomplish and what the mind can envision is what gave rise to and continues to propel and shape technology. It’s the spur for humankind’s extension of itself and elaboration of nature. Technology isn’t what makes us “post-human” or “transhuman,” as some writers and scholars have recently suggested. It’s what makes us human. Technology is in our nature. Through our tools we give our dreams form. We bring them into the world. The practicality of technology may distinguish it from art, but both spring from a similar, distinctly human yearning.

It follows that whenever we gain a new talent, we not only change our bodily capacities, we change the world. The ocean extends an invitation to the swimmer that it withholds from the person who has never learned to swim. With every skill we master, the world reshapes itself to reveal greater possibilities. It becomes more interesting, and being in it becomes more rewarding.

The digital technologies of automation, rather than inviting us into the world and encouraging us to develop new talents that enlarge our perceptions and expand our possibilities, often have quite the opposite effect. They’re designed to be disinviting. They pull us away from the world. That’s a consequence not only of the prevailing technology-centered design practices that place ease and efficiency above all other concerns. It also reflects the fact that, in our personal lives, the computer has become a media device, its software painstakingly programmed to grab and hold our attention. As most people know from experience, the computer screen is intensely compelling, not only for the conveniences it offers but also for the many diversions it provides. There’s always something going on, and we can join in at any moment with the slightest effort. Yet the screen, for all of its enticements and stimulations, is an environment of sparseness—fast-moving, efficient, clean, but revealing only a shadow of the world.

Ours may be a time of material comfort and technological wonder, but it’s also a time of aimlessness and gloom. During the first decade of this century, the number of Americans taking prescription drugs to treat depression or anxiety rose by nearly a quarter. One in five adults now regularly takes such medications. The suicide rate among middle-aged Americans increased by nearly 30 percent over the same ten years, according to a report from the Centers for Disease Control and Prevention. More than 10 percent of American schoolchildren, and nearly 20 percent of high-school-age boys, have been given a diagnosis of attention deficit hyperactivity disorder, and two-thirds of that group take drugs like Ritalin and Adderall to treat the condition. The reasons for our discontent are many and far from understood. But one of them may be that through the pursuit of a frictionless existence, we’ve succeeded in turning what Merleau-Ponty termed the ground of our lives into a barren place. Drugs that numb the nervous system provide a way to rein in our vital, animal sensorium, to shrink our being to a size that better suits our constricted environs.

If technology embodies our dreams, it also embodies other, less benign qualities in our makeup, such as our will to power and the arrogance and insensitivity that accompany it.

Automation weakens the bond between tool and user not because computer-controlled systems are complex but because they ask so little of us. They hide their workings in secret code. They resist any involvement of the operator beyond the bare minimum. They discourage the development of skillfulness in their use. Automation ends up having an anesthetizing effect. We no longer feel our tools as parts of ourselves.

We choose a tool because it’s new or it’s cool or it’s fast, not because it brings us more fully into the world and expands the ground of our experiences and perceptions. We become mere consumers of technology.

The belief in technology as a benevolent, self-healing, autonomous force is seductive. It allows us to feel optimistic about the future while relieving us of responsibility for that future. It particularly suits the interests of those who have become extraordinarily wealthy through the labor-saving, profit-concentrating effects of automated systems and the computers that control them. It provides our new plutocrats with a heroic narrative in which they play starring roles: recent job losses may be unfortunate, but they’re a necessary evil on the path to the human race’s eventual emancipation by the computerized slaves that our benevolent enterprises are creating. Peter Thiel, a successful entrepreneur and investor who has become one of Silicon Valley’s most prominent thinkers, grants that “a robotics revolution would basically have the effect of people losing their jobs.” But, he hastens to add, “it would have the benefit of freeing people up to do many other things.” Being freed up sounds a lot more pleasant than being fired.

In a prescient passage in The Human Condition, Hannah Arendt observed that if automation’s utopian promise were actually to pan out, the result would probably feel less like paradise than like a cruel practical joke. The whole of modern society, she wrote, has been organized as “a laboring society,” where working for pay, and then spending that pay, is the way people define themselves and measure their worth. Most of the “higher and more meaningful activities” revered in the distant past had been pushed to the margin or forgotten, and “only solitary individuals are left who consider what they are doing in terms of work and not in terms of making a living.”

What makes one tool superior to another has nothing to do with how new it is. What matters is how it enlarges us or diminishes us, how it shapes our experience of nature and culture and one another. To cede choices about the texture of our daily lives to a grand abstraction called progress is folly.
