The writing that comes out of the "transhumanist" crowd often baffles me. I’m not really equipped to follow most high-level philosophy discussions, so maybe it’s just me, or maybe I’m right and there’s a large dose of bluster and nonsense mixed into it. Here are some recent offerings that seem interesting, if a bit opaque.
The Future of Humanity Institute has launched the Overcoming Bias blog. It is described as follows:
How can we better believe what is true? While it is of
course useful to seek and study relevant information, our minds are
full of natural tendencies to bias our beliefs via overconfidence,
wishful thinking, and so on. Worse, our minds seem to have a natural
tendency to convince us that we are aware of and have adequately
corrected for such biases, when we have done no such thing.
In this forum we discuss whether and how we might avoid this fate,
by spending a bit less effort on each specific topic, and a bit more
effort on the general topic of how to be less biased. Here we discuss
common patterns of bias and self-deception, statistical and other
formal analysis tools, computational and data-gathering aids, and
social institutions which may discourage bias and encourage its
correction. Other topics may be discussed to the extent they exemplify
important biases and correction issues.
(Via IEET, Sentient Developments.)
As best I can tell, this blog is not about bias in discussions about transhumanism, which was how I first read it. It’s about discussing how future humans might not be biased. That’s what I gather from the abstract of a paper by Robin Hanson called "Enhancing Our Truth Orientation", to which the site links:
Humans lie to themselves, and often choose beliefs for reasons other than how
closely those beliefs approximate truth. This is mainly why we disagree. Three future
trends may reduce this epistemic vice. First, increased documentation and surveillance
should make it harder to lie and self-deceive about the patterns of our lives. Second,
speculative markets can create a relatively unbiased consensus on most debated topics
in science, business, and policy. Third, brain modifications may allow our minds to be
more transparent, so that lies and self-deception become harder to hide. In evaluating
these trends, we should be wary of moral arrogance.
Nick Bostrom (also of the Future of Humanity Institute) has posted a paper called "Dignity and Enhancement," commissioned for the President’s Council on Bioethics. The abstract reads:
Does human enhancement threaten our dignity as some have asserted? Or could our dignity perhaps be technologically enhanced? After disentangling several different concepts of dignity, this essay focuses on the idea of dignity as a quality (a kind of excellence admitting of degrees). The interactions between enhancement and dignity as a quality are complex and link into fundamental issues in ethics and value theory.
James Hughes and others from the same crowd are on a panel taking place at the UN to "discuss the impacts of emerging neurotechnologies on cognitive liberty." The meeting theme is described as follows:
Growing knowledge in the neurosciences, enhanced by exponential
advances in pharmacology and other neurotechnologies (technologies that
monitor and manipulate the brain) is rapidly moving brain research and
clinical applications beyond the scope of purely medical use. These
emerging neurotechnologies offer expanded intelligence, memory and
senses, giving us greater ability to understand and control our own
minds. But they also expand the avenues for possible coercion and
invasion of mental privacy. What is the state of cognitive liberty
today? What steps do we need to take to protect cognitive liberty,
mental privacy and freedom of choice in light of these emerging neurotechnologies?