Hello 2015

This is an archive of Question Technology, a blog that existed from about 2005–2011. It’s currently dormant but perhaps will live again. Back then (almost) nobody was critical. Now we have more critics…

This header illustration was copied from a site I can no longer find. :(

The Internet Intellectual (Morozov on Jarvis)

A fairly devastating takedown of Jeff Jarvis's new book Public Parts by Evgeny Morozov (author of The Net Delusion):

http://www.tnr.com/print/article/books/magazine/96116/the-internet-intellectual (print version – should not require sign-in).

I almost feel bad for Jarvis. It seems like a solid critique, one that takes on not only Jarvis but other Internet utopians (e.g. Clay Shirky), though it's perhaps a little mean-spirited.


A good article from the Guardian: Quiet voices must be heard to avert a future Fukushima. Some excerpts:

Japan's part-natural, part-human disaster is an extraordinary event. As well as dealing with the consequences of an earthquake and tsunami, rescuers are having to evacuate thousands of people from the danger zone around Fukushima. In addition, the country is blighted by blackouts from the shutting of 10 or more nuclear plants. It is a textbook case of how technology can increase our vulnerability through unintended side-effects.

Yet there had been early warnings from analysts. In 2006, the Japanese professor Katsuhiko Ishibashi resigned from a nuclear power advisory panel, saying that the policy of building in earthquake zones could lead to catastrophe, and that design standards for proofing them against damage were too lax. Further back, the seminal study of accidents in complex technologies was Charles Perrow's Normal Accidents, published in 1984.

Perrow, a Yale professor, analysed accidents in chemical plants, air traffic control, shipping and dams, as well as his main focus: the 1979 accident at the Three Mile Island nuclear plant in Pennsylvania. Things can go wrong with design, equipment, procedures, operators, supplies and the environment. Occasionally two or more will have problems simultaneously; in a complex technology such as a nuclear plant, the potential for this is ever-present. Perrow took five pages to sketch what went wrong in the first 13 seconds of the incident. He concluded that in complex systems, "no matter how effective conventional safety devices are, there is a form of accident that is inevitable" – hence "normal accidents".

Unfortunately, such events are often made worse by the way the nuclear industry and governments handle the early stages of disasters, as they reassure us that all is fine. Some statements are well intentioned. But as things get worse, people wonder why early reassurances were issued when it is apparent that there was no basis for them. It is simply too early to say what precisely went wrong at Fukushima, and it has been surprising to see commentators speak with such speed and certainty. Most people accept that they will only ever have a rough understanding of the facts. But they instinctively ask if they can trust those in charge and wonder why governments support particular technologies so strongly.

Industry and governments need to be more straightforward with the public. The pretence of knowledge is deeply unscientific; a more humble approach where officials are frank about the unknowns would paradoxically engender greater trust. Likewise, nuclear's opponents need to adopt a measured approach. We need a fuller democratic debate about the choices we are making. Catastrophic potential needs to be a central criterion in decisions about technology. Advice from experts is useful, but the most significant questions are ethical in character.

I've had Normal Accidents on the shelf for a while and figured now was a good time to finally read it. Perrow also published a sequel that just came out in paperback last month: The Next Catastrophe: Reducing our vulnerabilities to natural, industrial, and terrorist disasters.

Alone Together

I just finished reading Sherry Turkle's new book, Alone Together: Why we expect more from technology and less from each other (book website, Amazon) and I can't recommend it highly enough. She reports on her research into how people experience social media and social robots, and asks many important questions about where we're headed. I found the second half of the book, on social media, more compelling than the first, on robots, though Turkle's analysis does bring the two topics together nicely.

Growing Up Distracted

The New York Times has continued its "Your Brain On Computers" series with a piece on the digital distractions facing today's students: Growing Up Digital, Wired for Distraction.

Speaking of distractions, here is how the story is presented in my browser. Can you find it?


The twitterati are already having conniptions over the article. Cue the breathless rebuttals from people who apparently didn't read it carefully. It's the next Zadie Smith/Facebook. (Example of that sort of rebuttal: Literary Writers and Social Media: A Response to Zadie Smith.)

Robot teachers are on the way

There's an interesting article in the New York Times today about robotic teachers. An excerpt:

Researchers say the pace of innovation is such that these machines should begin to learn as they teach, becoming the sort of infinitely patient, highly informed instructors that would be effective in subjects like foreign language or in repetitive therapies used to treat developmental problems like autism.

Several countries have been testing teaching machines in classrooms. South Korea, known for its enthusiasm for technology, is “hiring” hundreds of robots as teacher aides and classroom playmates and is experimenting with robots that would teach English.

Already, these advances have stirred dystopian visions, along with the sort of ethical debate usually confined to science fiction. “I worry that if kids grow up being taught by robots and viewing technology as the instructor,” said Mitchel Resnick, head of the Lifelong Kindergarten group at the Media Laboratory at the Massachusetts Institute of Technology, “they will see it as the master.”

Most computer scientists reply that they have neither the intention, nor the ability, to replace human teachers. The great hope for robots, said Patricia Kuhl, co-director of the Institute for Learning and Brain Sciences at the University of Washington, “is that with the right kind of technology at a critical period in a child’s development, they could supplement learning in the classroom.”

Link: Students, Meet Your New Teacher, Mr. Robot.

I don't think you can fault the individual computer scientists' intentions here, and it may well be that robots offer unique value in certain special situations like working with autistic children. But I have to agree with those who find this trend disturbing. I don't think Resnick's worry about seeing robots "as the master" is the worst problem. Our society values technology more than it values teachers. These robots aren't solving a problem that couldn't be solved better with people. And down the road it's not hard to see the day when cheap robots become much more than just a supplement.

To repeat a quote I posted 5 years ago:

"In the end, it is the poor who will be chained to the computer; the rich will get teachers."

Stephen Kindel, quoted by Todd Oppenheimer in The Flickering Mind: Saving Education From the False Promise of Technology.