I just finished reading Sherry Turkle's new book, Alone Together: Why We Expect More from Technology and Less from Each Other (book website, Amazon), and I can't recommend it highly enough. She reports on her research into how people experience social media and social robots, and asks many important questions about where we're headed. I found the second half of the book, on social media, more compelling than the first, on robots, though Turkle's analysis does bring the two topics together nicely.
There's an interesting article in the New York Times today about robotic teachers. An excerpt:
Researchers say the pace of innovation is such that these machines
should begin to learn as they teach, becoming the sort of infinitely
patient, highly informed instructors that would be effective in subjects
like foreign language or in repetitive therapies used to treat
developmental problems like autism.
Several countries have been testing teaching machines in classrooms.
South Korea, known for its enthusiasm for technology, is “hiring”
hundreds of robots as teacher aides and classroom playmates and is
experimenting with robots that would teach English.
Already, these advances have stirred dystopian visions, along with the
sort of ethical debate usually confined to science fiction. “I worry
that if kids grow up being taught by robots and viewing technology as
the instructor,” said Mitchel Resnick, head of the Lifelong Kindergarten
group at the Media Laboratory at the Massachusetts Institute of Technology, “they will
see it as the master.”
Most computer scientists reply that they have neither the intention, nor
the ability, to replace human teachers. The great hope for robots, said
Patricia Kuhl, co-director of the Institute for Learning and Brain
Sciences at the University
of Washington, “is that with the right kind of technology at a
critical period in a child’s development, they could supplement learning
in the classroom.”
I don't think you can fault the individual computer scientists' intentions here, and it may well be that robots offer unique value in certain special situations like working with autistic children. But I have to agree with those who find this trend disturbing. I don't think Resnick's worry about seeing robots "as the master" is the worst problem. Our society values technology more than it values teachers. These robots aren't solving a problem that couldn't be solved better with people. And down the road it's not hard to see the day when cheap robots become much more than just a supplement.
To repeat a quote I posted 5 years ago:
"In the end, it is the poor who will be chained to the computer; the
rich will get teachers."
Stephen Kindel, quoted by Todd Oppenheimer in The Flickering Mind: Saving Education From the False Promise of Technology.
From the New York Times:
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
As examples, the scientists pointed to a number of technologies as
diverse as experimental medical systems that interact with patients to
simulate empathy, and computer worms and viruses that defy
extermination and could thus be said to have reached a “cockroach”
stage of machine intelligence.
While the computer scientists agreed that we are a
long way from Hal, the computer that took over the spaceship in “2001:
A Space Odyssey,” they said there was legitimate concern that
technological progress would transform the work force by destroying a
widening range of jobs, as well as force humans to learn to live with
machines that increasingly copy human behaviors.
The researchers — leading computer scientists, artificial intelligence researchers and
roboticists who met at the Asilomar Conference Grounds on Monterey Bay
in California — generally discounted the possibility of highly
centralized superintelligences and the idea that intelligence might
spring spontaneously from the Internet. But they agreed that robots
that can kill autonomously are either already here or will be soon.
A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.
John Markoff writes about AI, Ray Kurzweil, The Singularity, and other such things in a New York Times article. Excerpt:
[…] writers and eccentric computer prodigies, is back in fashion and
getting serious attention from NASA and from Silicon Valley companies like Google
as well as a new round of start-ups that are designing everything from
next-generation search engines to machines that listen or that are
capable of walking around in the world. A.I.’s new respectability is
turning the spotlight back on the question of where the technology might be heading and, more ominously, perhaps, whether computer intelligence will surpass our own, and how quickly. […]
Profiled in the documentary “Transcendent Man,”
which had its premiere last month at the TriBeCa Film Festival, and with
his own Singularity movie due later this year, Dr. Kurzweil has become
a one-man marketing machine for the concept of post-humanism. He is the
co-founder of Singularity University,
a school supported by Google that will open in June with a grand goal —
to “assemble, educate and inspire a cadre of leaders who strive to
understand and facilitate the development of exponentially advancing
technologies and apply, focus and guide these tools to address
humanity’s grand challenges.”
Not content with the development of
superhuman machines, Dr. Kurzweil envisions “uploading,” or the idea
that the contents of our brain and thought processes can somehow be
translated into a computing environment, making a form of immortality
possible — within his lifetime.
[…] raised eyebrows among hard-nosed technologists in the engineering
culture here, some of whom describe the Kurzweilian romance with
supermachines as a new form of religion. […]
“Kurzweil will probably die, along with the rest of us not too long
before the ‘great dawn,’ ” said Gary Bradski, a Silicon Valley
roboticist. “Life’s not fair.”
Link: The Coming Superbrain
P. W. Singer’s book Wired for War examines the robotics revolution in modern warfare. From the book’s description:
[…] to change not just how wars are fought, but also the politics,
economics, laws, and ethics that surround war itself. This upheaval is
already afoot — remote-controlled drones take out terrorists in
Afghanistan, while the number of unmanned systems on the ground in Iraq
has gone from zero to 12,000 over the last five years. But it is only
the start. Military officers quietly acknowledge that new prototypes
will soon make human fighter pilots obsolete, while the Pentagon
researches tiny robots the size of flies to carry out reconnaissance
work now handled by elite Special Forces troops.
Wired for War
takes the reader on a journey to meet all the various players in this
strange new world of war: odd-ball roboticists working in latter-day
“skunk works” in the midst of suburbia; military pilots flying combat
missions from their office cubicles outside Las Vegas; the Iraqi
insurgents who are their targets; journalists trying to figure out just
how to cover robots at war; and human rights activists wrestling with
what is right and wrong in a world where our wars are increasingly
being handed over to machines.
Update: Singer has an article in The New Atlantis, adapted from his book: Military Robots and the Laws of War.
Humans United Against Robots (HUAR) is a tongue-in-cheek campaign "designed to educate and aware the citizenry of the
world of the impending attack that computers and robots will put into
effect against humans." I like the art, if not the grammar.
HUAR is apparently a side project of web comedians Keith and the Girl.
I heard about it today when one of its members called in to an NPR Science Friday show about robots.
AI Panic! is a smart and funny blog by AI researcher and PhD student Robin Baumgarten:
What’s AI Panic?
This site is dedicated to research and unveil the perils, imminence
and probabilities of a hostile takeover of the world through artificial
intelligence. I will stay on the lookout for you and post articles,
research papers and break-throughs of everything that could affect this […]
Not me. Not yet, at least. And you probably shouldn’t, either. But staying alert and informed doesn’t hurt.
Video from the Technology in Wartime conference that took place a couple weeks ago is now online. I’ve been meaning to write up my notes from this but haven’t found the time.
It was an excellent event — CPSR did a great job of gathering some really impressive speakers. Particularly good were security expert Bruce Schneier, roboticist and ethicist Ronald Arkin, Benetech’s Patrick Ball on human rights data analysis (see HRDAG), and Neil Rowe on the ethics of cyberweapons. So if you’re checking out the videos I recommend starting there. I left before the last session, which also promised a good line-up.
Link: Technology in Wartime Video.
Computer Professionals for Social Responsibility is hosting a one-day public conference on Technology in Wartime. It will take place January 26th at Stanford University. From the conference description:
This conference will explore how computer technology is used during
war — both for the purposes of combat/defense, as well as for human
rights interventions into war-torn regions. Topics will include high
tech weapons systems, cyberwarfare, autonomous aircraft, mobile robots,
internet surveillance, anonymous communication, and privacy-enhancing
technologies that aid human rights workers documenting conditions in
war-torn countries and help soldiers communicate their experiences in
blogs and e-mail.
Our goal will be to consider the ethical implications of wartime
technologies and how these technologies are likely to affect
civilization in years to come. Ultimately we want to engage a pressing
question of our time: What should socially-responsible computer
professionals do in a time of high tech warfare?
Participants will include technology experts, military
professionals, policy-makers, scholars, and human rights workers.
Confirmed speakers include Bruce Schneier (BT Counterpane Security),
Barbara Simons (ACM), Herb Lin (NSF), Cindy Cohn (EFF), Patrick Ball
(Benetech), Neil Rowe (US Naval Academy), Ronald Arkin (Georgia Tech),
and Noah Shachtman (Wired magazine’s war correspondent).
The proceedings will be broadcast live on the Web, and the
presentations collected in book form online, released under a CC
license, and made available to the public and policy makers looking for
expert opinions on wartime technology issues during the election year.
Link: Technology in Wartime.
I’m planning to attend and will write something about it here afterwards.
See also this post about it by conference organizer Annalee Newitz at io9: Will We Hold Robots Accountable for War Crimes?