Why some people don’t care about information overload

A post by business writer Tom Davenport at a Harvard Business Review blog explains it all for us:

I gave a presentation this week on decision-making, and someone in the
audience asked me if I thought information overload was an impediment
to effective decision-making. "Information overload…yes, I remember
that concept. But no one cares about it anymore," I replied. In fact,
nobody ever did.

He offers a few shaky reasons for why information overload is not a problem, then concludes:

So the next time you hear someone talking or read someone writing about
information overload, save your own attention and tune that person out.
Nobody's ever going to do anything about this so-called problem, so
don't overload your own brain by wrestling with the issue.

Link: Why we don't care about information overload.

Wow. It's the kind of inane, superficial article I'd expect from somebody trying to write with one eye on their BlackBerry.

For some intelligent material on the topic, I recommend the Information Overload Research Group and Nathan Zeldes's blog Challenge Information Overload.

Relying on Google a little too much

Michael Zimmer has an amusing/scary story about a student's unquestioning use of Google: it's reported at Crooked Timber and Michael's blog (which appears to be down).

Speaking of Google, I just learned of Google's holiday card offer. If you can't be bothered to send a snail mail card to your pathetic relatives who are "stuck in the pre-digital age" then Google will do it for you (except that they've run out already). And, yes, that's just the way they describe it.

In privacy news, Eric Schmidt apparently forgot his talking points and said this in an interview: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." (quoted at Gawker; here's a response from security expert Bruce Schneier.)

Technology and the decline of copy editing

On The Media had a good interview this weekend with John McIntyre, a former newspaper copy editor, and one of many who have lost their jobs recently due to budget cuts. He talks about the increase in errors and reader complaints at newspapers as a result of the layoffs.

One reason they're among the first to go is that their work is less visible than that of, say, reporters. Another reason is that, on the Internet, readers just "don't expect things to be accurate or very well done and therefore they are used to tolerating a much higher level of shoddy work and a much greater volume of errors, and therefore you can sacrifice quality on the web and it doesn't mean that much." McIntyre points out, though, that copy editors do much more than fix typos; they have caught cases of plagiarism, falsification, and libel.

Link: Newspaper Layoffs (On The Media)

A related article by the ombudsman at the Washington Post: Declining Editing Staff Leads to Rise in Errors.

John McIntyre's new blog: You Don't Say.

Scientists debate dangers of AI

From the New York Times:

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

[…]

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

Link: Scientists Worry Machines May Outsmart Man