Waking up to the Web’s Misogyny

If by some chance you haven’t seen any of the discussion this week about the Kathy Sierra affair and about the prevalence of abuse towards women on the web, this article by Joan Walsh today in Salon is a good starting point.  Excerpt:

Is there really any doubt that women writing on the Web are subject to more abuse than men, simply because they’re women? Really? I’ve been following the Kathy Sierra blog storm, thinking I had nothing new to say, but the continued insistence that Sierra, and those who defend her, are somehow overreacting, or charging sexism where none exists, makes it hard for a mouthy woman to stay silent.

I say this as a mouthy woman who has tried for a long time to pretend otherwise: that Web misogyny isn’t especially rampant — but even if it is, it has no effect on me, or any other strong, sane woman doing her job. But I wasn’t being honest. My own reactions and those of others to the Sierra mess served to wrestle the truth out of me, and it wasn’t what I hoped.

Link: Men who hate women on the Web | Salon.com.
Also recommended: Annalee Newitz, Who’s afraid of Kathy Sierra?

Google Maps and Responsibility

Google is getting questioned over why they replaced post-Katrina satellite images showing New Orleans devastation with older, pre-Katrina images.  Was it just an innocent mistake?  From today’s SF Chronicle/AP:

Google’s replacement of post-Hurricane Katrina satellite imagery on its map portal with images of the region before the storm does a "great injustice" to the storm’s victims, a congressional subcommittee said.

The House Committee on Science and Technology’s subcommittee on investigations and oversight on Friday asked Google Inc. Chairman and CEO Eric Schmidt to explain why his company is using the outdated imagery.

The subcommittee cited an Associated Press report on the images.

"Google’s use of old imagery appears to be doing the victims of
Hurricane Katrina a great injustice by airbrushing history,"
subcommittee chairman Brad Miller, D-N.C., wrote in a letter to
Schmidt.

Swapping the post-Katrina images and the ruin they revealed for others showing an idyllic city dumbfounded many locals and even sparked suspicions that the company and civic leaders were conspiring to portray the area’s recovery progressing better than it is. […]

After Katrina, Google’s satellite images were in high demand among exiles and hurricane victims anxious to see whether their homes were damaged.

Now, though, a virtual trip through New Orleans is a surreal experience of scrolling across a landscape of packed parking lots and marinas full of boats.

Reality, of course, is very different: Entire neighborhoods are now slab mosaics where houses once stood and shopping malls, churches and marinas are empty of life, many gone altogether.

John Hanke, Google’s director for maps and satellite imagery, said "a combination of factors including imagery date, resolution, and clarity" go into deciding what imagery to provide.

"The latest update from one of our information providers substantially
improved the imagery detail of the New Orleans area," Hanke said in a
news release about the switch.

Kovacs said efforts are under way to use more current imagery. […]

Edith Holleman, staff counsel for the House subcommittee, said it would be useful to understand how Google acquires and manages its imagery because "people see Google and other Internet engines and it’s almost like the official word."

Link: Congressional subcommittee criticizes Google pre-Katrina images.

Lauren Weinstein calls it a bum rap:

Greetings. I know a bum rap when I see one. People who should know better — such as the House Committee on Science and Technology’s subcommittee on investigations and oversight chairman Brad Miller, D-North Carolina — are accusing Google of "airbrushing" history on Google Maps.
[…]

Google says that one of their imagery suppliers switched to older data that was higher resolution. Balancing timeliness of data with resolution is a non-trivial task for a mapping site, and in retrospect perhaps some sort of exception should have been carved out for that region when the changes went live, but hindsight is 20/20.

Link: Lauren Weinstein’s Blog: Google Getting a Bum Rap Over Hurricane Katrina Images.

I’m sure this was unintentional, but I don’t think it’s a bad idea to make Google answer questions about it and take some responsibility.  This is not the first time Google has tried to hide behind algorithms in response to questions about controversial results (see earlier posts: Google correctness, Computers are the new authorities).

We know that there’s some human selection going on with satellite images (to remove photos of military sites, for example).  Does Google do any of that or is it all done by the data suppliers?  Google needs to be more open about their processes.  The statement above by Edith Holleman has it exactly right.

On a more technical note, how about a button in Google Maps to show dates on the satellite photos?  That’s clearly an important piece of information for users to have.
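Mostly as a thought experiment: if each imagery tile carried a capture date, surfacing it would be straightforward.  A minimal sketch in Python (the Tile type and its metadata are invented for illustration; this is not Google’s actual API or data model):

```python
# Hypothetical sketch only: Tile and its fields are made up to show
# how a "show imagery dates" toggle could work if capture-date
# metadata were exposed per tile.
from dataclasses import dataclass
from datetime import date

@dataclass
class Tile:
    lat: float
    lon: float
    captured: date  # when the imagery was actually taken

def imagery_date_label(tile: Tile) -> str:
    """The label a 'show imagery dates' button could display."""
    return f"Imagery captured {tile.captured.isoformat()}"

# New Orleans coordinates with a made-up pre-Katrina capture date:
print(imagery_date_label(Tile(29.95, -90.07, date(2004, 3, 9))))
# -> Imagery captured 2004-03-09
```

Even something this simple would have let users see at a glance that they were looking at pre-Katrina imagery.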

Charles Perrow: The Next Catastrophe

Sociology Professor Charles Perrow’s book Normal Accidents is a classic study of the risks of complex systems (and I’m ashamed to say I haven’t yet read it).  He has a new book out called The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters.  From the book’s description:

Charles Perrow is famous worldwide for his ideas about normal accidents, the notion that multiple and unexpected failures–catastrophes waiting to happen–are built into our society’s complex systems. In The Next Catastrophe, he offers crucial insights into how to make us safer, proposing a bold new way of thinking about disaster preparedness.

Perrow argues that rather than laying exclusive emphasis on protecting targets, we should reduce their size to minimize damage and diminish their attractiveness to terrorists. He focuses on three causes of disaster–natural, organizational, and deliberate–and shows that our best hope lies in the deconcentration of high-risk populations, corporate power, and critical infrastructures such as electric energy, computer systems, and the chemical and food industries. Perrow reveals how the threat of catastrophe is on the rise, whether from terrorism, natural disasters, or industrial accidents. Along the way, he gives us the first comprehensive history of FEMA and the Department of Homeland Security and examines why these agencies are so ill equipped to protect us.

The Next Catastrophe is a penetrating reassessment of the very real dangers we face today and what we must do to confront them. Written in a highly accessible style by a renowned systems-behavior expert, this book is essential reading for the twenty-first century. The events of September 11 and Hurricane Katrina–and the devastating human toll they wrought–were only the beginning. When the next big disaster comes, will we be ready?

Transhumanists and Race

Today from the baffling world of transhumanist politics (last week’s episode: Transhumanists and IQ): the results of a poll asking "Will the ability to change skin, hair and body make racism irrelevant?"

It’s perhaps a good sign that the majority of these folks apparently aren’t so naive.  55% responded that such an ability might actually make things worse (and 25% said yes).  The moderator (James Hughes?) apparently isn’t happy with that result, writing "Oh my. What a bunch of pessimists."

Update: Bouphonia has a good post about these issues: The New Flesh.

Slow Down, Brave Multitaskers (NYT)

A good article by Steve Lohr in today’s New York Times debunks some of the current mythology surrounding multitasking:

Confident multitaskers of the world, could I have your attention?

Think you can juggle phone calls, e-mail, instant messages and computer work to get more done in a time-starved world? Read on, preferably shutting out the cacophony of digital devices for a while.

Several research reports, both recently published and not yet published, provide evidence of the limits of multitasking. The findings, according to neuroscientists, psychologists and management professors, suggest that many people would be wise to curb their multitasking behavior when working in an office, studying or driving a car.

These experts have some basic advice. Check e-mail messages once an hour, at most. Listening to soothing background music while studying may improve concentration. But other distractions — most songs with lyrics, instant messaging, television shows — hamper performance. Driving while talking on a cellphone, even with a hands-free headset, is a bad idea.

In short, the answer appears to lie in managing the technology, instead of merely yielding to its incessant tug.

“Multitasking is going to slow you down, increasing the chances of mistakes,” said David E. Meyer, a cognitive scientist and director of the Brain, Cognition and Action Laboratory at the University of Michigan. “Disruptions and interruptions are a bad deal from the standpoint of our ability to process information.”

The human brain, with its hundred billion neurons and hundreds of trillions of synaptic connections, is a cognitive powerhouse in many ways. “But a core limitation is an inability to concentrate on two things at once,” said René Marois, a neuroscientist and director of the Human Information Processing Laboratory at Vanderbilt University.

[…]

In a recent study, a group of Microsoft workers took, on average, 15 minutes to return to serious mental tasks, like writing reports or computer code, after responding to incoming e-mail or instant messages. They strayed off to reply to other messages or browse news, sports or entertainment Web sites.

“I was surprised by how easily people were distracted and how long it took them to get back to the task,” said Eric Horvitz, a Microsoft research scientist and co-author, with Shamsi Iqbal of the University of Illinois, of a paper on the study that will be presented next month.

“If it’s this bad at Microsoft,” Mr. Horvitz added, “it has to be bad at other companies, too.”

Link: Slow Down, Multitaskers, and Don’t Read in Traffic – New York Times.

These findings are not completely new, but I guess a new generation of multitaskers needs a new generation of evidence.  And that last point about work interruptions was made twenty years ago in the software management classic Peopleware.

Rational and Irrational Risks

In a Wired essay on the psychology of risk, Bruce Schneier argues that we humans are not yet as rational as we should be:

People are not computers. We don’t evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, we have shortcuts, rules of thumb, stereotypes and biases — generally known as "heuristics." […]

When you examine the brain heuristics about risk, security and trade-offs, you can find evolutionary reasons for why they exist. And most of them are still very useful.

The problem is that they can fail us, especially in the context of a modern society. Our social and technological evolution has vastly outpaced our evolution as a species, and our brains are stuck with heuristics that are better suited to living in primitive and small family groups.

And when those heuristics fail, our feeling of security diverges from the reality of security.

Link: Wired News: Human Brain a Poor Judge of Risk.
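For what it’s worth, the "mathematical" evaluation Schneier says we skip is just probability-weighted arithmetic.  A minimal sketch, with invented numbers (nothing from his essay):

```python
# Expected loss = probability of the event x cost if it happens.
# All figures below are made up for illustration.

def expected_loss(probability: float, cost: float) -> float:
    """Probability-weighted cost of an adverse event."""
    return probability * cost

# A vivid but rare risk vs. a mundane but common one:
rare_dramatic = expected_loss(probability=1e-6, cost=1_000_000)  # 1.0
common_mundane = expected_loss(probability=0.05, cost=500.0)     # 25.0

# The mundane risk is 25x worse in expectation, yet the dramatic one
# usually *feels* scarier -- the divergence Schneier describes.
print(rare_dramatic, common_mundane)
```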

I think Schneier may be the best writer we have on issues of security, but when it comes to psychology I wonder about his authority.  The current fad for evolutionary psychology has people over-applying it and making all sorts of unscientific conjectures.  (Just to be totally clear: I’m not criticizing evolution, just some aspects of the relatively new field of evolutionary psychology.)  Do we really know so much about people that we can isolate behaviors so cleanly and say definitively that they serve no purpose at all?

Schneier seems to assume that a more orderly, rational brain would necessarily be a better, more evolved one.  He writes:

The human brain is a fascinating organ, but it’s an absolute mess. Because it has evolved over millions of years, there are all sorts of processes jumbled together rather than logically organized. Some of the processes are optimized for only certain kinds of situations, while others don’t work as well as they could. There’s some duplication of effort, and even some conflicting brain processes.

But these statements are not rational and neutral; they’re judgments coming from a very particular worldview.  One could argue (perhaps even using evolutionary psychology gimmicks) that a messy brain has its advantages, as might all of these other aspects he criticizes.  Do we really know?  Sure, it’s good to educate ourselves about risks and try to be more rational, but to suggest that we know how a perfect brain might be built to handle risk seems quite a leap.

Risks of Genetically Modified Mosquitoes?

Seed Magazine/AFP reports:

US researchers have created genetically-modified mosquitoes resistant to a malaria parasite, raising the possibility of one day stopping the spread of the disease, a new study says.
[…]

The research offers a way of controlling malaria by introducing the genetically altered insects into the wild and having them take over from their natural cousins.

Link: Seed: Scientists Create Mosquito Resistant to Malaria.

Benjamin Cohen at The World’s Fair wisely observes:

Perhaps doing so isn’t such a good idea. Perhaps ecological awareness would suggest that the consequences of such a move are not entirely understandable by us. The problem may not be solvable with strictly technical means.

Link: The World’s Fair – Misguided Science of the Day.

He also links to this Onion article from 2001: New Technological Breakthrough To Fix Problems of Previous Breakthrough.  Very nice.

Excerpts from The No-Nonsense Guide to Science

At the Post-Normal Times blog, Jerry Ravetz has posted excerpts from his book, The No-Nonsense Guide to Science (described here previously).  An excerpt of an excerpt:

The decline of the illusion of objectivity

Over the last half-century, science has experienced great transformations in its scale, size, power, destructiveness, and corporate control and social responsibility. There is lively debate over many policy issues concerning health and the environment, and over proposed innovations such as those in the GRAINN set. But until we get over the illusion of objectivity of science, as embodied in its supposed certainty and value-freedom, those debates will be hindered and distorted. So long as each side in a debate believes that it has all the simple and conclusive facts, it will demonise the other, and dialogue will not be achieved. We need not fall into some nihilistic philosophy of total subjectivity or power-games. That is not the only alternative to the lost illusion of perfect objectivity of science. To find a viable alternative we will need to examine why scientific objectivity is no longer common sense.

The process is already well underway. Towards the end of the last century, just too many things began to go wrong for science. First we discovered how mankind has been polluting the environment. And sometimes the pollution was worse when the science was the strongest. The first big pollution scare came in 1963 with Silent Spring, where the death of the songbirds was explained by their being poisoned with agricultural pesticides. Then we had the accidents in civil nuclear power. Of all industries this was the one most completely based on science. We might have expected that an industry created and run by scientists would not be vulnerable to sloppy workmanship and elementary blunders; but we were wrong. In both those cases, as in many others, the pattern was that even where science had defined the situation, something would unexpectedly go wrong, leading to an accident or disaster. Then science would be brought in for the attempt to understand the accident and to prevent its happening again. It was as if science was chasing after itself in the cleanup jobs, retrospectively correcting its own mistakes.

The public’s experience of values, priorities, choices and exclusions has come through debates on science in fields relating to health and the environment. For a very long time, supporters of ‘alternative energy’ have pointed to the vast disparity between the meagre funds doled out to them for research and development, and the huge sums still lavished on the moribund nuclear power industry. In medical research, patients’ groups have observed how the lion’s share of the resources, even those collected and allocated by charities, goes on that ‘basic’ research which someone hopes and claims will solve the problems of cause and cure of the disease. At the same time, research on the quality of treatments and of care is left on the margins. The reasons are plain: everyone hopes for a ‘magic bullet’ which will kill the pathogen that makes us sick. Also, that sort of research is also useful in building a career in the relevant research science. By contrast, treatment and care are the ‘soft’ sciences, in which there are no Nobel prizes. It doesn’t take much imagination to see how particular sets of values are built into the ruling criteria of quality in science.

Link: The Post-Normal Times – Putting Science into Context: Excerpts from The No-Nonsense Guide to Science.

Blaming “Human Error”

From BoingBoing (emphasis added):

A technician reformatting a hard drive at the Alaska Department of Revenue accidentally erased the backup drive. What really sucked though is that the tape backup of the backup turned out to be corrupted. Apparently, the cost to painstakingly restore the data from hardcopy was more than $220,000. Nobody was punished for the human error.

Link: Boing Boing: Accidental hard drive erasure cost Alaska $220k

Original story from CNN/AP: Oops! Technician’s error wipes out data for state fund

I don’t know the specifics of this case, but most of the time when the media reports "human error" as the cause of some data loss, plane crash, nuclear power plant accident or other technological calamity, further investigation reveals that bad design is a bigger factor. 

Don Norman, in The Design of Everyday Things, cites the common example of deleting files (and I am probably paraphrasing this badly).  Everyone has done this: you tell the computer to delete a file, it asks "Are you sure?", and you hit "Yes" without thinking.  Then you realize you meant "No".  The confirmation dialog is pretty much useless; we can’t switch contexts that quickly, and our brain is still thinking "delete the file".  The solution is better design: don’t really delete the file; move it to a trash can or recycle bin instead.
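To make Norman’s point concrete, here’s a minimal sketch of the reversible-delete idea (illustrative paths; real trash implementations also handle name collisions and scheduled cleanup):

```python
# Soft delete: move the file into a trash directory instead of
# destroying it, so a too-quick "Yes" can be undone.
import shutil
from pathlib import Path

TRASH = Path.home() / ".trash"  # illustrative location

def delete(path: Path) -> Path:
    """'Delete' by moving into the trash; reversible, so no
    "Are you sure?" dialog is needed."""
    TRASH.mkdir(exist_ok=True)
    destination = TRASH / path.name  # a real trash would avoid collisions
    shutil.move(str(path), str(destination))
    return destination

def undelete(name: str, restore_dir: Path) -> Path:
    """The 'undo' that a confirmation dialog can never give you."""
    restored = restore_dir / name
    shutil.move(str(TRASH / name), str(restored))
    return restored
```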

People make mistakes all the time.  Computer systems need to be designed with this in mind.  In the Alaska data case, I would bet that the system made it dangerously easy for this to happen.  It’s also not uncommon for a backup to fail at the same time as the main system; mostly this happens because backups rarely get tested or used.
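On the backup point, an untested backup is really just a hope.  A minimal sketch of the kind of routine verification that would have caught the corrupted tape before it was needed (invented helper names, not any particular backup tool):

```python
# Verify a backup by comparing checksums against the original, so
# corruption is caught at backup time rather than at restore time.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_is_intact(original: Path, backup: Path) -> bool:
    """True only if the copy matches the original, byte for byte."""
    return checksum(original) == checksum(backup)
```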

So, stop blaming the user!  Often they’re just a victim of bad design.

Other related reading: The Human Factor by Kim Vicente, Normal Accidents by Charles Perrow.

A related New Yorker cartoon: "Human Error: Again".

Transhumanists and IQ

Here’s an interesting gaffe.  In the latest newsletter from the Institute for Ethics and Emerging Technologies (IEET), a group promoting radical human enhancement technologies (I subscribe just for kicks), there’s a chart comparing IQ to socio-economic status, from what’s obviously quite a biased perspective (think The Bell Curve), along with the following text:

Demonstrating the deadly feedback loop between socio-economic conditions and IQ. Cognitive enhancement will only fix one part of the loop, but it certainly can’t hurt.

I’d rather not reproduce the chart, but you can find it under March 15 at the IEET’s blog.  You can also find it at one of SEED magazine’s ScienceBlogs, Omni Brain: Why IQ Matters – a graph, which is apparently where the IEET got it from.

Neither blog cites the source of the graph, but one of Omni Brain’s readers determined that it’s the work of Linda Gottfredson, a defender of The Bell Curve, who (according to Wikipedia) "advocates the innate intellectual inferiority of African Americans."

Steve Higgins of Omni Brain says he posted the chart to be ironic, though he admits he didn’t know where it came from.  The IEET’s post shows no trace of irony, but given their ultra-left/libertarian "cyborg democracy" politics, I assume they would not support Gottfredson’s racist work and that the chart got posted without much thought.  Without that context, a naive observer (a student, say) would assume they’re offering the chart up as scientific fact.

Of course, this is the web… what would it be without snippets of junk floating all over out of context?