The luddite McLuhan

March 01, 2009

Marshall McLuhan was such a slyboots. He kills me. He continues to be known, of course, as the enthusiastic prophet of the coming electronic utopia, the guy who slathered intellectual grease on progress's rails. The skeptical, sometimes dystopian, subtext of his work went largely unnoticed when he was alive, and it's even more submerged today.

This weekend I was reading through Understanding Me, a collection of interviews with McLuhan, and I came upon this telling passage from a 1966 TV interview with the journalist Robert Fulford:

Fulford: What kind of world would you rather live in? Is there a period in the past or a possible period in the future you'd rather be in?

McLuhan: No, I'd rather be in any period at all as long as people are going to leave it alone for a while.

Fulford: But they're not going to, are they?

McLuhan: No, and so the only alternative is to understand everything that's going on, and then neutralize it as much as possible, turn off as many buttons as you can, and frustrate them as much as you can. I am resolutely opposed to all innovation, all change, but I am determined to understand what's happening because I don't choose just to sit and let the juggernaut roll over me. Many people seem to think that if you talk about something recent, you're in favor of it. The exact opposite is true in my case. Anything I talk about is almost certain to be something I'm resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.

The Sun interview

February 27, 2009

I have the honor of being the designated interviewee in the March issue of The Sun magazine. The interview, by Arnie Cooper, covers a lot of ground, and it's been posted in its entirety on The Sun's site. Here's a taste:

Cooper: Do you think computers have harmed our relationship with nature?

Carr: I certainly think they’ve gotten in the way of our relationship to nature. As we increasingly connect with the world through computer screens, we’re removing ourselves from direct sensory contact with nature. In other words, we’re learning to substitute symbols of reality for reality itself. I think that’s particularly true for children who’ve grown up surrounded by screens from a young age. You could argue that this isn’t necessarily something new, that it’s just a continuation of what we saw with other electronic media like radio or tv. But I do think it’s an amplification of those trends.

Cooper: What about the interactivity of the Internet? Isn’t it a step above the passivity that television engenders?

Carr: The interactivity of the Net brings a lot of benefits, which is one of the main reasons we spend so much time online. It lets us communicate with one another more efficiently, and it gives us a powerful new means of sharing our opinions, pursuing our interests and hobbies with others, and disseminating our creative works through, for instance, blogs, social networks, YouTube, and photo-publishing sites. Those benefits are real and shouldn’t be denigrated. But I’m wary of drawing sharp distinctions between “active” and “passive” media. Are we really “passive” when we’re immersed in a great novel or a great movie or listening to a great piece of music? I don’t think so. I think we’re deeply engaged, and our intellect is extremely active. When we view or read or listen to something meaningful, when we devote our full attention to it, we broaden and deepen our minds. The danger with interactive media is that they draw us away from quieter and lonelier pursuits. Interactivity is compelling because its rewards are so easy and immediate, but they’re often also superficial.

The free arts and the servile arts

February 22, 2009

I have taken it upon myself to mash up the words of Steve Gillmor, posted yesterday at TechCrunchIT, and the words of Andrew Louth, published in 2003 at the Times Higher Education site:

Gillmor: We’re at the threshold of the realtime moment. The advent of a reasonably realtime message bus over public networks has changed something about the existing infrastructure in ways that are not yet important to a broad section of Internet dwellers. The numbers are adding up — 175 million Facebook users, tens of thousands of instant Twitter followers, constant texting and video chats among the teenage crowd.

The standard attack on realtime is that it is the new crack. We’re all addicted to our devices, to the flow of alerts, messages, and bite-sized information chunks. We no longer have time for blog posts, refreshing our Twitter streams for pointers to what our friends think is important. It’s the revenge of the short attention span brought on by 30-second television ads — the myth of multi-tasking spread across a sea of factoids that Nick Carr fears will destroy scholarship and ultimately thinking. Of course this is true and also completely irrelevant.

Louth: The medieval university was a place that made possible a life of thought, of contemplation. It emerged in the 12th century from the monastic and cathedral schools of the early Middle Ages where the purpose of learning was to allow monks to fulfil their vocation, which fundamentally meant to come to know God. Although knowledge of God might be useful in various ways, it was sought as an end in itself. Such knowledge was called contemplation, a kind of prayerful attention.

The evolution of the university took the pattern of learning that characterised monastic life - reading, meditation, prayer and contemplation - out of the immediate context of the monastery. But it did not fundamentally alter it. At its heart was the search for knowledge for its own sake. It was an exercise of freedom on the part of human beings, and the disciplines involved were to enable one to think freely and creatively. These were the liberal arts, or free arts, as opposed to the servile arts to which a man is bound if he has in mind a limited task.

In other words, in the medieval university, contemplation was knowledge of reality itself, as opposed to that involved in getting things done. It corresponded to a distinction in our understanding of what it is to be human, between reason conceived as puzzling things out and that conceived as receptive of truth. This understanding of learning has a history that goes back to the roots of western culture. Now, this is under serious threat, and with it our notion of civilisation.

Gillmor: My daughter told her mother today that her boyfriend was spending too much time on IM and video-chat, and not enough on getting his homework done. She actually said these words: “I told him you have to get away from the computer sometimes, turn it off, give yourself time to think.” This is the same daughter who will give up anything - makeup, TV, food — just as long as I don’t take her computer or iPhone away.

So realtime is the new crack, and even the naivest of our culture realizes it can eat our brains. But does that mean we will stop moving faster and faster? No. Does that mean we will give up our blackberries when we become president? No. Then what will happen to us?

Louth: Western culture, as we have known it from the time of classical Greece onwards, has always recognised that there is more to human life than a productive, well-run society. If that were not the case, then, as Plato sourly suggests, we might just as well be communities of ants or bees. But there is more than that, a life in which the human mind glimpses something beyond what it can achieve. This kind of human activity needs time in which to be undistracted and open to ideas.

Gillmor: The browser brought us an explosion of Web pages. The struggle became one of time and location; RSS and search to the rescue. The time from idea to publish to consumption approached realtime. The devices then took charge, widening the amount of time to consume the impossible flow. The Blackberry expanded work to all hours. The iPhone blurred the distinction between work and play. Twitter blurred personal and public into a single stream of updates. Facebook blurred real and virtual friendships. That’s where we are now.

Louth: Martin Heidegger made a distinction between the world that we have increasingly shaped to our purposes and the earth that lay behind all this, beyond human fashioning. The world is something we know our way around. But if we lose sight of the realm of the earth, then we have lost touch with reality. It was, for Heidegger, the role of the poet to preserve a sense of the earth, to break down our sense of security arising from familiarity with the world. We might think of contemplation, the dispassionate beholding of reality, in a similar way, preventing us from mistaking the familiar tangle of assumption and custom for reality, a tangle that modern technology and the insistent demands of modern consumerist society can easily bind into a tight web.

Secret agent moth

February 21, 2009

Elsewhere on the robotics front, the U.S. Defense Advanced Research Projects Agency (Darpa) is making good progress towards its goal of turning insects into remote-controlled surveillance and monitoring instruments. Three years ago, Darpa launched its Hybrid Insect Micro-Electro-Mechanical Systems (HI-MEMS) project, with the intent, as described by IEEE Spectrum, of creating "moths or other insects that have electronic controls implanted inside them, allowing them to be controlled by a remote operator. The animal-machine hybrid will transmit data from mounted sensors, which might include low-grade video and microphones for surveillance or gas sensors for natural-disaster reconnaissance. To get to that end point, HI-MEMS is following three separate tracks: growing MEMS-insect hybrids, developing steering electronics for the insects, and finding ways to harvest energy from them to power the cybernetics."

Papers presented this month at the IEEE International Solid-State Circuits Conference described breakthroughs that promise to help the agency fulfill all three goals. One group of researchers, from the Boyce Thompson Institute for Plant Research, has succeeded in inserting "silicon neural interfaces for gas sensors ... into insects during the pupal phase." Another group, affiliated with MIT, has created a "low-power ultrawide-band radio" and "a digital baseband processor." Both are tiny and light enough to be attached to a cybernetic moth. The group has also developed a "piezoelectric energy-harvesting system that scavenges power from vibrations" as a moth beats its wings. The system may be able to supply the power required by the camera and transmitter.

Now, where the hell did I stick that can of Raid?

The artificial morality of the robot warrior

Great strides have been made in recent years in the development of combat robots. The US military has deployed ground robots, aerial robots, marine robots, stationary robots, and (reportedly) space robots. The robots are used for both reconnaissance and fighting, and further rapid advances in their design and capabilities can be expected in the years ahead. One consequence of these advances is that robots will gain more autonomy, which means they will have to act in uncertain situations without direct human instruction. That raises a large and thorny challenge: how do you program a robot to be an ethical warrior?

The Times of London this week pointed to an extensive report on military robots, titled Autonomous Military Robotics: Risk, Ethics, and Design, which was prepared in December for the US Navy by the Ethics & Emerging Technologies Group at the California State Polytechnic University. In addition to providing a useful overview of the state of the art in military robots, the report offers a fascinating examination of how software writers might go about programming what the authors call "artificial morality" into machines.

The authors explain why it's imperative that we begin to explore robot morality:

Perhaps robot ethics has not received the attention it needs, at least in the US, given a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when computers were simpler and their programs could be written and understood by a single person. Now, programs with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways ... Furthermore, increasing complexity may lead to emergent behaviors, i.e., behaviors not programmed but arising out of sheer complexity.

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The authors also note that "military robotics have already failed on the battlefield, creating concerns with their deployment (and perhaps even more concern for more advanced, complicated systems) that ought to be addressed before speculation, incomplete information, and hype fill the gap in public dialogue." They point to a mysterious 2008 incident when "several TALON SWORDS units—mobile robots armed with machine guns—in Iraq were reported to be grounded for reasons not fully disclosed, though early reports claim the robots, without being commanded to, trained their guns on ‘friendly’ soldiers; and later reports denied this account but admitted there had been malfunctions during the development and testing phase prior to deployment." They also report that in 2007 "a semi‐autonomous robotic cannon deployed by the South African army malfunctioned, killing nine ‘friendly’ soldiers and wounding 14 others." These failures, along with some spectacular failures of robotic systems in civilian applications, raise "a concern that we ... may not be able to halt some (potentially‐fatal) chain of events caused by autonomous military systems that process information and can act at speeds incomprehensible to us, e.g., with high‐speed unmanned aerial vehicles."

In the section of the report titled "Programming Morality," the authors describe some of the challenges of creating the software that will ensure that robotic warriors act ethically on the battlefield:

Engineers are very good at building systems to satisfy clear task specifications, but there is no clear task specification for general moral behavior, nor is there a single answer to the question of whose morality or what morality should be implemented in AI ...

The choices available to systems that possess a degree of autonomy in their activity and in the contexts within which they operate, and greater sensitivity to the moral factors impinging upon the course of actions available to them, will eventually outstrip the capacities of any simple control architecture. Sophisticated robots will require a kind of functional morality, such that the machines themselves have the capacity for assessing and responding to moral considerations. However, the engineers that design functionally moral robots confront many constraints due to the limits of present‐day technology. Furthermore, any approach to building machines capable of making moral decisions will have to be assessed in light of the feasibility of implementing the theory as a computer program.

After reviewing a number of possible approaches to programming a moral sense into machines, the authors recommend an approach that combines the imposition of "top-down" rules with the development of a capacity for "bottom-up" learning:

A top‐down approach would program rules into the robot and expect the robot to simply obey those rules without change or flexibility. The downside ... is that such rigidity can easily lead to bad consequences when events and situations unforeseen or insufficiently imagined by the programmers occur, causing the robot to perform badly or simply do horrible things, precisely because it is rule‐bound.

A bottom‐up approach, on the other hand, depends on robust machine learning: like a child, a robot is placed into variegated situations and is expected to learn through trial and error (and feedback) what is and is not appropriate to do. General, universal rules are eschewed. But this too becomes problematic, especially as the robot is introduced to novel situations: it cannot fall back on any rules to guide it beyond the ones it has amassed from its own experience, and if those are insufficient, then it will likely perform poorly as well.

As a result, we defend a hybrid architecture as the preferred model for constructing ethical autonomous robots. Some top‐down rules are combined with machine learning to best approximate the ways in which humans actually gain ethical expertise ... The challenge for the military will reside in preventing the development of lethal robotic systems from outstripping the ability of engineers to assure the safety of these systems.
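To make that hybrid architecture a little more concrete, here is a minimal sketch of how the two layers might fit together. It is my own illustration, not anything drawn from the report's design, and every name in it (Action, Situation, permitted, learned_scorer) is invented for the example: a set of hard-coded top-down rules vetoes candidate actions outright, and a bottom-up scorer, presumed to have been trained from experience and feedback, ranks whatever survives.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    lethal: bool

@dataclass
class Situation:
    target_is_civilian: bool
    positive_identification: bool

# Top-down layer: hard constraints the machine may never violate,
# encoded directly by the programmers.
RULES = [
    lambda a, s: not (a.lethal and s.target_is_civilian),
    lambda a, s: not (a.lethal and not s.positive_identification),
]

def permitted(action, situation):
    return all(rule(action, situation) for rule in RULES)

def choose_action(candidates, situation, learned_scorer):
    # Bottom-up layer: a scorer trained through trial, error, and feedback
    # ranks only the options that the fixed rules allow.
    lawful = [a for a in candidates if permitted(a, situation)]
    if not lawful:
        return Action("hold_fire", lethal=False)  # default to inaction
    return max(lawful, key=lambda a: learned_scorer(a, situation))

The rigidity the authors worry about lives entirely in the RULES list; whatever the learning component picks up is confined to ordering the actions the rules already permit.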

The development of autonomous robot warriors stirs concerns beyond just safety, the authors acknowledge:

Some have [suggested that] the rise of such autonomous robots creates risks that go beyond specific harms to societal and cultural impacts. For instance, is there a risk of (perhaps fatally?) affronting human dignity or cherished traditions (religious, cultural, or otherwise) in allowing the existence of robots that make ethical decisions? Do we ‘cross a threshold’ in abrogating this level of responsibility to machines, in a way that will inevitably lead to some catastrophic outcome? Without more detail and reason for worry, such worries as this appear to commit the ‘slippery slope’ fallacy. But there is worry that as robots become ‘quasi‐persons,' even under a ‘slave morality’, there will be pressure to eventually make them into full‐fledged Kantian‐autonomous persons, with all the risks that entails. What seems certain is that the rise of autonomous robots, if mishandled, will cause popular shock and cultural upheaval, especially if they are introduced suddenly and/or have some disastrous safety failures early on.

The good news, according to the authors, is that emotionless machines have certain built-in ethical advantages over human warriors. "Robots," they write, "would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost." Of course, this raises deeper issues, which the authors don't address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice - with all the messiness that goes with it?

The avatar of my father

February 16, 2009

HORATIO: O day and night, but this is wondrous strange.

The Singularity - the prophesied moment when artificial intelligence leaps ahead of human intelligence, rendering man both obsolete and immortal - has been jokingly called "the rapture of the geeks." But to Ray Kurzweil, the most famous of the Singularitarians, it's no joke. In a profile in the current issue of Rolling Stone (not available online), Kurzweil describes how, in the wake of the Singularity, it will become possible not only to preserve living people for eternity (by uploading their minds into computers) but to resurrect the dead.

Kurzweil looks forward in particular to his reunion with his beloved father, Fredric, who died in 1970. "Kurzweil's most ambitious plan for after the Singularity," writes Rolling Stone's David Kushner, "is also his most personal":

Using technology, he plans to bring his dead father back to life. Kurzweil reveals this to me near the end of our conversation ... In a soft voice, he explains how the resurrection would work. "We can find some of his DNA around his grave site - that's a lot of information right there," he says. "The AI will send down some nanobots and get some bone or teeth and extract some DNA and put it all together. Then they'll get some information from my brain and anyone else who still remembers him."

When I ask how exactly they'll extract the knowledge from his brain, Kurzweil bristles, as if the answer should be obvious: "Just send nanobots into my brain and reconstruct my recollections and memories." The machines will capture everything: the piggyback ride to the grocery store, the bedtime reading of Tom Swift, the moment he and his father rejoiced when the letter of acceptance from MIT arrived. To provide the nanobots with even more information, Kurzweil is safeguarding the boxes of his dad's mementos, so the artificial intelligence has as much data as possible from which to reconstruct him. Father 2.0 could take many forms, he says, from a virtual-reality avatar to a fully functioning robot ... "If you can bring back life that was valuable in the past, it should be valuable in the future."

There's a real poignancy to Kurzweil's dream of bringing his dad back to life by weaving together strands of DNA and strands of memory. I could imagine a novel - by Ray Bradbury, maybe - constructed around his otherworldly yearning. Death makes strange even the most rational of minds.

Cloud gazing

February 12, 2009

For those of you who just can't get enough of this cloud thing, here's some weekend reading. Berkeley's Reliable Adaptive Distributed Systems Laboratory - the RAD Lab, as it's groovily known - has a new white paper, Above the Clouds: A Berkeley View of Cloud Computing, that examines the economics of the cloud model, from both a user's and a supplier's perspective, and lays out the opportunities and obstacles that will likely shape the development of the industry in the near to medium term. And, in the new issue of IEEE Spectrum, Randy Katz surveys the state of the art in the construction of cloud data centers.

Another little IBM deal

February 11, 2009

On August 12, 1981, nearly 28 years ago, IBM introduced its personal computer, the IBM PC. Hidden inside was an operating system called MS-DOS, which the computing giant had licensed from a pipsqueak company named Microsoft. IBM didn't realize it at the time, but the deal, which allowed Microsoft to maintain its ownership of the operating system and to license it to other companies, turned out to be the seminal event in defining the commercial landscape for the computing business throughout the ensuing PC era. IBM, through the deal, anointed Microsoft as the dominant company of that era.

Today, as a new era in computing dawns, IBM announced another deal, this time with Amazon Web Services, a pipsqueak in the IT business but an early leader in cloud computing. Under the deal, corporations and software developers will be able to run IBM's commercial software in Amazon's cloud. As the Register's Timothy Prickett Morgan reports, "IBM announced that it would be deploying a big piece of its database and middleware software stack on Amazon's Elastic Compute Cloud (EC2) service. The software that IBM is moving out to EC2 includes the company's DB2 and Informix Dynamic Server relational databases, its WebSphere Portal and sMash mashup tools, and its Lotus Web Content Management program ... The interesting twist on the Amazon-IBM deal is that Big Blue is going to let companies that have already bought software licenses run that software out on the EC2 cloud, once the offering is generally available."

Prickett Morgan also notes, "If compute clouds want to succeed as businesses instead of toys, they have to run the same commercial software that IT departments deploy internally on their own servers. Which is why [the] deal struck between IBM and Amazon's Web Services subsidiary is important, perhaps more so for Amazon than for Big Blue."

It doesn't seem like such a big deal, and it probably isn't. But you never know. The licensing of MS-DOS seemed like small potatoes when it happened. Could the accidental kingmaker have struck again?

UPDATE: Dana Gardner speculates on the upshot.

The automatically updatable book

Your library has been successfully updated.
The next update is scheduled for 09:00 tomorrow.
Click this message to continue reading.

One of the things that happens when books and other writings start to be distributed digitally through web-connected devices like the Kindle is that their text becomes provisional. Automatic updates can be sent through the network to edit the words stored in your machine - similar to the way that, say, software on your PC can be updated automatically today. This can, obviously, be a very useful service. If you buy a tourist guide to a city and one of the restaurants it recommends goes out of business, the recommendation can easily be removed from all the electronic versions of the guide. So you won't end up heading off to a restaurant that doesn't exist - something that happens fairly regularly with printed guides, particularly ones that are a few years old. If the city guide is published only in electronic form through connected devices, the old recommendation in effect disappears forever - it's erased from the record. It's as though the recommendation was never made.
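The mechanics are no more exotic than those of any software updater. Here is a minimal sketch of the client side, with an invented endpoint and file layout (the UPDATE_URL, the JSON shape, and the library folder are all assumptions for illustration): the device asks the store what has changed, downloads the revised text, and overwrites its local copy in place.

import json
import urllib.request
from pathlib import Path

LIBRARY = Path("library")  # local store of purchased books, one text file per title
UPDATE_URL = "https://bookstore.example.com/library/updates"  # hypothetical sync endpoint

def sync_library():
    LIBRARY.mkdir(exist_ok=True)
    # Ask the store which titles have revised text since the last sync.
    with urllib.request.urlopen(UPDATE_URL) as response:
        updates = json.load(response)  # assumed shape: [{"book_id": ..., "text_url": ...}]
    for update in updates:
        with urllib.request.urlopen(update["text_url"]) as response:
            new_text = response.read()
        # The old words are simply overwritten; nothing in the loop keeps a prior version.
        (LIBRARY / (update["book_id"] + ".txt")).write_bytes(new_text)

Note what is missing: there is no archival step, no diff shown to the reader, no record that the earlier wording ever existed.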

Which is okay for guidebooks, but what about for other books? If you look ahead, speculatively, to a time when more and more books start being published only in electronic versions and distributed through Kindles, smartphones, PCs, and other connected devices, does history begin to become as provisional as the text in the books? Stephanie at UrbZen sketches out the dark scenario:

Consider that for everything we gain with a Kindle—convenience, selection, immediacy—we’re losing something too. The printed word—physically printed, on paper, in a book—might be heavy, clumsy or out of date, but it also provides a level of permanence and privacy that no digital device will ever be able to match. In the past, restrictive governments had to ban whole books whose content was deemed too controversial, inflammatory or seditious for the masses. But then at least you knew which books were being banned, and, if you could get your hands on them, see why. Censorship in the age of the Kindle will be more subtle, and much more dangerous.

Consider what might happen if a scholar releases a book on radical Islam exclusively in a digital format. The US government, after reviewing the work, determines that certain passages amount to a national security threat, and sends Amazon and the publisher national security letters demanding the offending passages be removed. Now not only will anyone who purchases the book get the new, censored copy, but anyone who had bought the book previously and then syncs their Kindle with Amazon—to buy another book, pay a bill, whatever—will, probably unknowingly, have the old version replaced by the new, “cleaned up” version on their device. The original version was never printed, and now it’s like it didn’t even exist. What’s more, the government now has a list of everyone who downloaded both the old and new versions of the book.

Stephanie acknowledges that this scenario may come off as "a crazy conspiracy theory spun by a troubled mind with an overactive imagination." And maybe that's what it is. Still, she's right to raise the issue. The unanticipated side effects of new technologies often turn out to be their most important effects. Printed words are permanent. Electronic words are provisional. The difference is vast and the implications worth pondering.

The writing is on the paywall

February 10, 2009

There has been much interesting speculation about the future of the newspaper business in recent weeks. There was Michael Hirschorn's pre-obituary for the print edition of the New York Times in The Atlantic. He foresees the Times shrinking into "a bigger, better, and less partisan version of the Huffington Post." There was the Times's David Carr running the old micropayments idea up the flagpole. Look to iTunes, he suggested, for a model of how "to perform a cashectomy on users." In a Time cover story, Walter Isaacson also endorsed the development of "an iTunes-easy method of micropayment [that] will permit impulse purchases of a newspaper, magazine, article, blog or video for a penny, nickel, dime or whatever the creator chooses to charge." In a memo posted at Poynter Online, Steve Brill argued that newspapers, the Times in particular, need to abandon the practice of giving away their stories online and begin charging for access to their content, either through pay-as-you-go micropayments or through various sorts of subscriptions.

Shadowing the discussion, naturally, have been anti-paper agitators like Clay Shirky and Jeff Jarvis. To them, the renewal of talk about asking folks to - gasp! chuckle! guffaw! - pay for content is yet more evidence of the general cluelessness of the dead-tree crowd, who are simply too dim to realize that publishers have been rendered impotent and it's the "users" now who call all the shots. "Back in the real world," says Shirky, "the media business is being turned upside down by our new freedoms and our new roles. We’re not just readers anymore, or listeners or viewers. We’re not customers and we’re certainly not consumers. We’re users. We don’t consume content, we use it, and mostly what we use it for is to support our conversations with one another, because we’re media outlets now too." Consumers pay; users don't.

Shirky argues, in particular, that micropayments won't work. "The essential thing to understand about small payments is that users don’t like being nickel-and-dimed. We have the phrase ‘nickel-and-dimed’ because this dislike is both general and strong." I think Shirky is right. (He wrote a seminal paper on micropayments some years ago.) But I also think he overstates his case. The clue comes in his misinterpretation of the phrase "nickel-and-dimed." We say we're being nickel-and-dimed when a company charges us lots of small, frivolous fees for stuff that has no value to us. The classic example is a bank charging for every check you write or every ATM withdrawal you make. We don't say we're being nickel-and-dimed when we buy a product we want for a very low price - a pack of gum, say, or a postage stamp. Spending a nickel or a dime (or a quarter or a dollar) for something you want is not an annoyance. It's a purchase.

Shirky's need to see all forms of micropayments as dead ends leads him into a tortured attempt to dismiss Apple's success at selling songs for less than a buck a pop through iTunes. "People are not paying for music on ITMS because we have decided that fee-per-track is the model we prefer," he writes, "but because there is no market in which commercial alternatives can be explored." Huh? Au contraire: a whole lot of people have indeed decided that they don't mind paying a small fee to purchase a song. There are other music-sales models out there - most notably, various forms of subscriptions - and some, like eMusic, have had some success, while others have failed spectacularly. Nearly all the music for sale at iTunes is also available for free through services that facilitate illicit downloading. A huge amount of music continues to be trafficked that way, but nevertheless Apple's experience demonstrates that a sizable market exists for purchasing media products piecemeal at small prices. I can pretty much guarantee that if Apple were to start charging 10 cents, or 5 cents, for a track, they would actually sell a lot more of them. Buyers wouldn't, in other words, run away, screaming "don't nickel-and-dime me!", because they find spending such tiny amounts a horrible hassle. They'd buy more. The iTunes store and Amazon's music store demonstrate that consumers can be trained to spend small amounts of money for products and services they desire.

Still, I don't see micropayments working for news. Most news stories, for one thing, are transitory, disposable things. That makes them very different from songs, which we buy because we want to "own" them, to have the ability to play them over and over again. We don't want to own news stories; we just want to read them or glance over them. Hawking stories piecemeal is a harder sell than hawking tunes; the hassle factor is more difficult to overcome. Second, news stories are - and I'm speaking very generally here - more fungible than songs. If you want the Kings of Leon's "Sex on Fire," you want the Kings of Leon's "Sex on Fire." A wimpy Coldplay number just ain't going to scratch that itch. But while there are certainly differences in quality among news stories on the same subject, sometimes very great differences, they may not matter for people looking for a quick synopsis of the facts, particularly if the alternatives are being given away free. And most news stories also go out of date very, very quickly. The window during which you'd have any chance of selling one is exceedingly brief. Finally, people don't have any experience buying individual news stories the way they have with buying individual songs (as 45s or cassette singles or CD singles). So the whole concept just seems weird.

Does that mean that a micropayments system absolutely, positively won't work for newspapers? No. But it does mean it's a heck of a longshot and not worth pinning one's hopes on.

So is the idea of getting people to pay for news online an impossible dream? You'd certainly think so reading people like Shirky and Jarvis, who can't wait for old-time newspaper publishers to be dead and buried so we can get on with some vague, communal "reinvention" of news production and distribution. But the freeniacs are wrong. Charging people for news, even online, is by no means an impossible dream. Yes, it often seems like an impossible dream today, but that's because the news market is currently, and massively, distorted. But market distortions have a way of sorting themselves out. Indeed, that's one of the main reasons we have markets.

The essential problem with the newspaper business today is that it is suffering from a huge imbalance between supply and demand. What the Internet has done is broken the geographical constraints on news distribution and flooded the market with stories, with product. Supply so far exceeds demand that the price of the news has dropped to zero. Substitutes are everywhere. To put it another way, the geographical constraints on the distribution of printed news required the fragmentation of production capacity, with large groups of reporters and editors being stationed in myriad local outlets. When the geographical constraints went away, thanks to the Net and the near-zero cost of distributing digital goods anywhere in the world, all that fragmented (and redundant) capacity suddenly merged together into (in effect) a single production pool serving (in effect) a single market. Needless to say, the combined production capacity now far, far exceeds the demand of the combined market.

In this environment, you're about as likely to be able to charge for an online news story as you are to charge for air. And the overabundance of supply means, as well, an overabundance of advertising inventory. So not only can't you charge for your product, but you can't make decent ad revenues either. Bad times.

Now here's what a lot of people seem to forget: Excess production capacity goes away, particularly when that capacity consists not of capital but of people. Supply and demand, eventually and often painfully, come back into some sort of balance. Newspapers have, with good reason, been pulling their hair out over the demand side of the business, where a lot of their product has, for the time being, lost its monetary value. But the solution to their dilemma actually lies on the production side: particularly, the radical consolidation and radical reduction of capacity. The number of U.S. newspapers is going to collapse (although we may have differently branded papers produced by the same production operation) and the number of reporters, editors, and other production side employees is going to continue to plummet. And syndication practices, geared to a world of geographic constraints on distribution, will be rethought and, in many cases, abandoned.

As all that happens, market power begins - gasp, chuckle, and guffaw all you want - to move back to the producer. The user no longer gets to call all the shots. Substitutes dry up, the perception of fungibility dissipates, and quality becomes both visible and valuable. The value of news begins, once again, to have a dollar sign beside it.

Shirky claims we're "in a media environment with low barriers to entry for competition." But that's an illusion born of the current supply-demand imbalance. The capital requirements for an online news operation are certainly lower than for a print one, but the labor costs remain high. Reporters, editors, photographers, and other newspaper production workers are skilled professionals who require good and fair pay and benefits and, often, substantial travel allowances. It's a fantasy to believe that the production of all the kinds of news that people value, particularly hard news, can be shifted over to amateurs or journeymen working for peanuts or some newfangled journo-syndicalist communes. Certainly, amateurs and volunteers can do some of the work that used to be done by professional journalists in professional organizations. Free-floating freelancers can also do some of the work. The journo-syndicalist communes will, I suppose, be able to do some of the work. And that's all well and good. But they can't do all of the work, and they certainly can't do all of the most valuable work. The news business will remain a fundamentally commercial operation. Whatever the Internet dreamers might tell you, it ain't going to a purely social production model.

Newspapers are certainly guilty of not battening down the spending hatches early enough. But if you look at, say, the New York Times's emerging "last-man-standing" strategy, as laid out in yesterday's issue, you see an approach that makes sense, and one that is actually built on a rational view of the future. Make sure you have enough cash to ride out the storm, trim your spending, defend your quality and your brand, expand into the new kinds of products and services that the web makes possible and that serve to expand your reader base. And then sit tight and wait for your weaker competitors to fail. As one analyst, looking toward the future, says in the Times story, "'there could be dramatically fewer newspapers,' leaving those that remain in a stronger position to compete for readers and ads. 'And then the New York Times should be a survivor.'"

Once you radically reduce supply in the industry, the demand picture changes radically as well. Ad inventory goes down, and ad rates go up. And things that seem unthinkable now - online subscription fees - suddenly become feasible. We also, at that point, get disabused of the fantasy that there's no such thing as news consumers. We see that providing fodder for "conversations" is not the primary value of the news; it's an important value, but it's a secondary value. The newspaper industry is in the midst of a fundamental restructuring, and if you think that restructuring is over - that what we see today is the end state - you're wrong. Markets for valuable goods do not stay disrupted. They evolve to a new and sustainable commercial state. Tomorrow's reality will be different from today's.

What I'm laying out here isn't a pretty scenario. It means lots of lost jobs - good ones - and lots of failed businesses. The blood will run in the streets, as the chipmakers say when production capacity gets way ahead of demand in their industry. It may not even be good news in the long run. We'll likely end up with a handful of mega-journalistic-entities, probably spanning both text and video, and hence fewer choices. This is what happens on the commercial web: power and money consolidate. But we'll probably also end up with a supply of good reporting and solid news, and we'll probably pay for it.

Big Switch giveaway

February 09, 2009

To mark the publication of the paperback edition of my book The Big Switch, which The Independent last week called "simultaneously lucid and mind-boggling," I'm giving away five signed copies. I will mail a copy to each of the first five people who correctly answer the following three lucid but mind-boggling questions:

1. What fruit was implicated in the death of Alan Turing?

2. Last week, Google attributed its glitch that labeled the entire Web as hazardous to "human error." What famous movie character, describing another computer snafu, said, "It can only be attributable to human error"?

3. What flavor of soft drink is mentioned in the third verse of the final track on Werner Vogels' favorite album of 1969?

The contest is over! Thanks for participating. The answers are:

1. Apple

2. HAL

3. Cherry red

Smackdown

February 08, 2009

A while back, Clay Shirky argued that watching TV is like being an alky and that the Internet is the 12-step cure. Now, Daniel Markham, in his post Technology Is Heroin, says the cure is worse than the disease. If watching television is like sucking on a bottle of gin, using the Net is like mainlining speedballs with a dirty needle. Both men claim to have history on their side. You be the judge.
