The Internet Intellectual (Morozov on Jarvis)

A fairly devastating takedown of Jeff Jarvis's new book Public Parts by Evgeny Morozov (author of The Net Delusion):

http://www.tnr.com/print/article/books/magazine/96116/the-internet-intellectual (print version – should not require sign-in).

I almost feel bad for Jarvis. The critique seems solid, and it takes on not only Jarvis but other Internet utopians (Clay Shirky, for example), though it's perhaps a little mean-spirited.

Growing Up Distracted

The New York Times has continued its "Your Brain On Computers" series with a piece on the digital distractions facing today's students: Growing Up Digital, Wired for Distraction.

Speaking of distractions, here is how the story is presented in my browser. Can you find it?

[Image: NYTdistraction]

The twitterati are already having conniptions over the article. Cue the breathless rebuttals from people who apparently didn't read it carefully. It's the next Zadie Smith/Facebook. (Example of that sort of rebuttal: Literary Writers and Social Media: A Response to Zadie Smith.)

Robot teachers are on the way

There's an interesting article in the New York Times today about robotic teachers. An excerpt:

Researchers say the pace of innovation is such that these machines should begin to learn as they teach, becoming the sort of infinitely patient, highly informed instructors that would be effective in subjects like foreign language or in repetitive therapies used to treat developmental problems like autism.

Several countries have been testing teaching machines in classrooms. South Korea, known for its enthusiasm for technology, is “hiring” hundreds of robots as teacher aides and classroom playmates and is experimenting with robots that would teach English.

Already, these advances have stirred dystopian visions, along with the sort of ethical debate usually confined to science fiction. “I worry that if kids grow up being taught by robots and viewing technology as the instructor,” said Mitchel Resnick, head of the Lifelong Kindergarten group at the Media Laboratory at the Massachusetts Institute of Technology, “they will see it as the master.”

Most computer scientists reply that they have neither the intention, nor the ability, to replace human teachers. The great hope for robots, said Patricia Kuhl, co-director of the Institute for Learning and Brain Sciences at the University of Washington, “is that with the right kind of technology at a critical period in a child’s development, they could supplement learning in the classroom.”

Link: Students, Meet Your New Teacher, Mr. Robot.

I don't think you can fault the individual computer scientists' intentions here, and it may well be that robots offer unique value in certain special situations like working with autistic children. But I have to agree with those who find this trend disturbing. I don't think Resnick's worry about seeing robots "as the master" is the worst problem. Our society values technology more than it values teachers. These robots aren't solving a problem that couldn't be solved better with people. And down the road it's not hard to see the day when cheap robots become much more than just a supplement.

To repeat a quote I posted 5 years ago:

"In
the end, it is the poor who will be chained to the computer; the rich
will get teachers."

Stephen Kindel, quoted by Todd Oppenheimer in The Flickering Mind: Saving Education From the False Promise of Technology.

Radiation Treatment Errors and Bad Design

The New York Times has an excellent investigative report on radiation treatment errors. It tells the story of two patients who died because of errors and examines how often such events occur. Sadly, the errors usually look preventable in hindsight. And predictably, the manufacturers of the machines blame the technicians who operate them, when in truth a major cause is software designed without proper attention to safety and usability.

Link: Radiation Offers New Cures, and Ways to Do Harm.

The article is the first in a series called The Radiation Boom. This kind of deep reporting is what makes the NYT and organizations like it so valuable.

Why some people don’t care about information overload

A post by business writer Tom Davenport at a Harvard Business Review blog explains it all for us:

I gave a presentation this week on decision-making, and someone in the audience asked me if I thought information overload was an impediment to effective decision-making. "Information overload…yes, I remember that concept. But no one cares about it anymore," I replied. In fact, nobody ever did.

He offers a few shaky reasons why information overload is not a problem, then concludes:

So the next time you hear someone talking or read someone writing about information overload, save your own attention and tune that person out. Nobody's ever going to do anything about this so-called problem, so don't overload your own brain by wrestling with the issue.

Link: Why we don't care about information overload.

Wow. It's the kind of inane, superficial article I'd expect from somebody writing with one eye on their BlackBerry.

For some intelligent material on the topic, I recommend the Information Overload Research Group and Nathan Zeldes's blog Challenge Information Overload.

Relying on Google a little too much

Michael Zimmer has an amusing/scary story about a student's unquestioning use of Google: it's reported at Crooked Timber and Michael's blog (which appears to be down).

Speaking of Google, I just learned of Google's holiday card offer. If you can't be bothered to send a snail-mail card to your pathetic relatives who are "stuck in the pre-digital age," Google will do it for you (except that they've already run out). And yes, that's really how they describe it.

In privacy news, Eric Schmidt apparently forgot his talking points and said this in an interview: "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place." (quoted at Gawker; here's a response from security expert Bruce Schneier.)