Fukushima

A good article from the Guardian: Quiet voices must be heard to avert a future Fukushima. Some excerpts:

Japan's part-natural, part-human disaster is an extraordinary event. As well as dealing with the consequences of an earthquake and tsunami, rescuers are having to evacuate thousands of people from the danger zone around Fukushima. In addition, the country is blighted by blackouts from the shutting of 10 or more nuclear plants. It is a textbook case of how technology can increase our vulnerability through unintended side-effects.

Yet there had been early warnings from analysts. In 2006, the Japanese professor Katsuhiko Ishibashi resigned from a nuclear power advisory panel, saying that the policy of building in earthquake zones could lead to catastrophe, and that design standards for proofing them against damage were too lax. Further back, the seminal study of accidents in complex technologies was Charles Perrow's Normal Accidents, published in 1984.

Perrow, a Yale professor, analysed accidents in chemical plants, air traffic control, shipping and dams, as well as his main focus: the 1979 accident at the Three Mile Island nuclear plant in Pennsylvania. Things can go wrong with design, equipment, procedures, operators, supplies and the environment. Occasionally two or more will have problems simultaneously; in a complex technology such as a nuclear plant, the potential for this is ever-present. Perrow took five pages to sketch what went wrong in the first 13 seconds of the incident. He concluded that in complex systems, "no matter how effective conventional safety devices are, there is a form of accident that is inevitable" – hence "normal accidents".

Unfortunately, such events are often made worse by the way the nuclear industry and governments handle the early stages of disasters, as they reassure us that all is fine. Some statements are well intentioned. But as things get worse, people wonder why early reassurances were issued when it is apparent that there was no basis for them. It is simply too early to say what precisely went wrong at Fukushima, and it has been surprising to see commentators speak with such speed and certainty. Most people accept that they will only ever have a rough understanding of the facts. But they instinctively ask if they can trust those in charge and wonder why governments support particular technologies so strongly.

Industry and governments need to be more straightforward with the public. The pretence of knowledge is deeply unscientific; a more humble approach where officials are frank about the unknowns would paradoxically engender greater trust. Likewise, nuclear's opponents need to adopt a measured approach. We need a fuller democratic debate about the choices we are making. Catastrophic potential needs to be a central criterion in decisions about technology. Advice from experts is useful, but the most significant questions are ethical in character.

I've had Normal Accidents on the shelf for a while and figured now was a good time to finally read it. Perrow also published a sequel that just came out in paperback last month: The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters.

Radiation Treatment Errors and Bad Design

The New York Times has an excellent investigative report into radiation treatment errors. They tell the story of two patients who died due to errors, and report on how often these events occur. Sadly, the errors usually look preventable in hindsight. And predictably, the manufacturers blame the technicians who operate the machines, when in truth a main cause is software designed without proper attention to safety and usability.

Link: Radiation Offers New Cures, and Ways to Do Harm.

The article is the first in a series called The Radiation Boom. This kind of deep reporting is what makes the NYT and organizations like it so valuable.

Michael Sandel on Genetics and Morality

"It is tempting to think that bioengineering our children and ourselves for success in a competitive society is an exercise of freedom. But changing our nature to fit the world, rather than the other way around, is actually the deepest form of disempowerment. It distracts us from reflecting critically on the world. It deadens the impulse to social and political improvement. So I say rather than bioengineer our children and ourselves to fit the world, let's instead create social and political arrangements more hospitable to the gifts and the limitations of the imperfect human beings that we are."

– From the Reith Lectures given earlier this year by Michael Sandel, quoted at Biopolitical Times blog.

Sandel's book about the ethics of genetic engineering just came out in paperback: The Case against Perfection: Ethics in the Age of Genetic Engineering.

Scientists debate dangers of AI

From the New York Times:

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

[…]

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

Link: Scientists Worry Machines May Outsmart Man

New Books

Some recent books I've bought or spotted:

Hal Niedzviecki's The Peep Diaries: How We're Learning to Love Watching Ourselves and Our Neighbors looks at oversharing in the digital age. Naturally he has a blog, a twitter account, a webcam, a forthcoming documentary, and much more at the book's site.

From the book description:

We have entered the age of "peep culture": a tell-all, show-all, know-all digital phenomenon that is dramatically altering notions of privacy, individuality, security, and even humanity. Peep culture is reality TV, YouTube, MySpace, Facebook, Twitter, over-the-counter spy gear, blogs, chat rooms, amateur porn, surveillance technology, Dr. Phil, Borat, cell phone photos of your drunk friend making out with her ex-boyfriend, and more. In the age of peep, core values and rights we once took for granted are rapidly being renegotiated, often without our even noticing.

[…] Part travelogue, part diary, part meditation and social history, The Peep Diaries explores a rapidly emerging digital phenomenon that is radically changing not just the entertainment landscape, but also the firmaments of our culture and society.

Richard Sennett's The Craftsman, just out in paperback, seems like a broad hybrid of sociology, psychology, history, cultural studies and philosophy. I've only read a couple of chapters, and while it's not the quickest read, I'm finding it compelling as it combines a lot of things I'm interested in. In the book's prologue (about half of which you can read in the Amazon preview) he says that the book is the first of a planned "Pandora" trilogy. It sounds ambitious, though he seems mightily prolific. He writes:

This is the first of three books on material culture, all related to the dangers in Pandora's casket, though each is intended to stand on its own. This book is about craftsmanship, the skill of making things well. The second volume addresses the crafting of rituals that manage aggression and zeal; the third explores the skills required in making and inhabiting sustainable environments. All three books address the issue of technique–but technique considered as a cultural issue rather than as a mindless procedure; each book is about a technique for conducting a particular way of life. The large project contains a personal paradox that I have tried to put to productive use. I am a philosophically minded writer asking questions about such matters as woodworking, military drills, or solar panels.

Bill Wasik, an editor at Harper's and apparently the inventor of the flash mob, has a new book called And Then There's This: How Stories Live and Die in Viral Culture. From the description:

And Then There’s This is Bill Wasik’s journey along the unexplored frontier of the twenty-first century’s rambunctious new-media culture. He covers this world in part as a journalist, following “buzz bands” as they rise and fall in the online music scene, visiting with viral marketers and political trendsetters and online provocateurs. But he also wades in as a participant, conducting his own hilarious experiments: an e-mail fad (which turned into the worldwide “flash mob” sensation), a viral website in a monthlong competition, a fake blog that attempts to create “antibuzz,” and more. He doesn’t always get the results he expected, but he tries to make sense of his data by surveying what real social science experiments have taught us about the effects of distraction, stimulation, and crowd behavior on the human mind. Part report, part memoir, part manifesto, part deconstruction of a decade, And Then There’s This captures better than any other book the way technology is transforming our culture.

Wade Rouse's (third) memoir At Least in the City Someone Would Hear Me Scream: Misadventures in Search of the Simple Life tells the story of his attempt to become a self-described “modern-day Thoreau.” Sounds fairly amusing, and I like the cover.

In a slightly similar vein is One Square Inch of Silence: One Man's Search for Natural Silence in a Noisy World by Gordon Hempton. Hempton is an "acoustic ecologist" and writes about his experiences recording the quietest places in the country. The book comes with a CD and is an outgrowth of the One Square Inch project, which seeks to preserve a quiet space in Olympic National Park.

Dan Lyons on Singularity Man Ray Kurzweil

Dan Lyons (formerly Fake Steve Jobs) has an article in Newsweek about Ray Kurzweil, who is behind the new Singularity University and whose book The Singularity Is Near will soon be a movie. Excerpt:

Ray Kurzweil's wildest dream is to be turned into a cyborg—a flesh-and-blood human enhanced with tiny embedded computers, a man-machine hybrid with billions of microscopic nanobots coursing through his bloodstream. And there's a moment, halfway through a conversation in his office in Wellesley, Mass., when I start to think that Kurzweil's transformation has already begun. It's the way he talks—in a flat, robotic monotone. Maybe it's just because he's been giving the same spiel, over and over, for years now. He does 70 speeches annually at $30,000 a pop, and draws crowds of adoring fans who worship him as a kind of prophet. Kurzweil is a legend in the world of computer geeks, an inventor, author and computer scientist who bills himself as a futurist. The ideas he's espousing are as radical as anything you've ever heard. But the strangest thing about Ray Kurzweil is that when you sit down for a one-on-one chat with him, he's absolutely boring.

Listen closely, though, and you may be slightly terrified. Kurzweil believes computer intelligence is advancing so rapidly that in a couple of decades, machines will be as intelligent as humans. Soon after that they will surpass humans and start creating even smarter technology. By the middle of this century, the only way for us to keep up will be to merge with the machines so that their superior intelligence can boost our weak little brains and beef up our pitiful, illness-prone bodies. Some of Kurzweil's fellow futurists believe these superhuman computers will want nothing to do with us—that we will become either their pets or, worse yet, their food. Always an optimist, Kurzweil takes a more upbeat view. He swears these superhuman computers will love us, and honor us, since we'll be their ancestors. He also thinks we'll be able to embed our consciousness into silicon, which means we can live on, inside machines, forever and ever, amen.

Link: Ray Kurzweil Wants to Be a Robot.

See also this companion article by John Horgan: Ray Kurzweil's Science Cult.