By Jordan Carroll

“We are on a threshold of a change in the universe comparable to the transition from nonlife to life.” – Hans Moravec, roboticist at Carnegie Mellon University

According to many researchers, less than a century from now, we will develop some brand of computer equal to us in intelligence. Rudimentary consciousness might develop within the machine or, if not, a human mind could be scanned and uploaded into the computer, effectively creating a simulacrum of man. It would then “think” and “act” in the same way as a human being, perhaps even going so far as to emulate human feelings. The resultant entity, however “human,” might at first be treated as a sub-animal, a nonentity, or, at best, a slave.

This new being would not remain our “slave” for long. With the mind of a human and the capabilities of a computer, it would effectively become our competitor. In order to keep up with this new species, and in order to gain an edge over other humans, many would “merge” with their machines. Our descendants might augment their brains with cybernetic implants, with certain advanced drugs, or through other high-tech means. However extraordinary this may seem, it is not simply the speculation of science fiction authors. The tools, from nanotechnology and genetic engineering to neuroscience and quantum computing, are being developed. Eventually these people would become more artificial than natural, effectively becoming “post-human.” At this point, their rights might be called into question. Some might consider them inanimate objects and thus attempt to deprive them of what is rightfully theirs. In the name of progress and transcendence, it would be as important to protect the rights of post-humans as it is to protect our own.

There would be some resentment toward post-humanity. Violence against technology is nothing new. In the early 19th century, the Luddites waged a backlash against industrial machinery, and their attacks were so widespread that the British Parliament made the destruction of certain machines punishable by death. Artificial intelligence and post-humanity might, in the future, become subject to similar attacks. The post-humans would be stronger, smarter, and more efficient than typical Homo sapiens, but they would not be “natural.” Discontents and Neo-Luddites would undoubtedly attempt to thwart their creation and, after their advent, hinder their further development.

In this more detached time, then, we should set the basic principles to govern both augmented humanity and cybernetic entities. We should, first and foremost, consider them equal to humanity: they should have every right and responsibility a human being has. The definition of a human being would then broaden beyond genetic boundaries to include all self-aware, self-preserving entities of sufficient complexity. This central idea would prevent the degradation of both mankind and post-humanity. If we do not treat them as equals, why should we expect them to be as gracious to us? If both parties respect each other as fellow conscious entities, the nightmare scenarios of reactionary science fiction need never occur.

The implications of these rights and responsibilities are more difficult to consider. The destruction of a sufficiently advanced machine, I think, should be considered “murder,” equal to the murder of a human. The problem, though, would be determining at what point a machine becomes “sufficiently advanced.” Some form of Turing test would have to be developed to detect true consciousness in a machine, and it is possible that we might be fooled by machines that only appear to be conscious, yet are not. The consideration of these issues, issues of the mind and the definition of a self-aware being, may become more important than ever before.
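The imitation game behind the Turing test can be sketched as a simple protocol: a judge poses questions to a hidden human and a hidden machine and tries to tell which is which; a machine “passes” when the judge can do no better than chance. The sketch below is purely illustrative, with invented stand-in respondents and a deliberately naive judge; a real test would involve a human judge and free-form conversation.

```python
import random

# Illustrative sketch only: the respondent and judge functions here are
# invented stand-ins, not a real consciousness test.

def human_respond(prompt: str) -> str:
    # Stand-in for a hidden human's reply.
    return f"Hmm, {prompt.lower()} is a hard question."

def machine_respond(prompt: str) -> str:
    # Stand-in for a candidate machine's reply; here it imitates
    # the human perfectly, so the two are indistinguishable.
    return f"Hmm, {prompt.lower()} is a hard question."

def imitation_game(prompts, judge, rng):
    """Run one round per prompt and return the fraction of rounds in
    which the judge correctly identified the machine."""
    correct = 0
    for prompt in prompts:
        # Hide which respondent is which behind a random ordering.
        pair = [("human", human_respond), ("machine", machine_respond)]
        rng.shuffle(pair)
        answers = [fn(prompt) for _, fn in pair]
        guess = judge(prompt, answers)  # index the judge believes is the machine
        if pair[guess][0] == "machine":
            correct += 1
    return correct / len(prompts)

def naive_judge(prompt, answers):
    # The replies are identical, so this judge can only guess at random.
    return random.randrange(len(answers))
```

Because the stand-in machine imitates the human exactly, the judge's accuracy hovers around one half, which in Turing's framing is precisely what a passing machine looks like. The deeper problem the essay raises remains untouched by any such protocol: a flawless imitation tells us nothing certain about consciousness.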

The beginning of post-humanity would not be the end of mankind, but the liberation of the human spirit. We would no longer be chained to the limits of evolution, biology, and circumstance. Our destinies would be, for the first time in history, entirely our own. Man’s years would increase to centuries. All things would eventually become potential. Even the boundary between one individual and the next may some day be broken. Man would become overman.

Though there are many moral ambiguities involved, this potential technology is not inherently evil. A thinking computer or a clone does not, in and of itself, assume some faceless Orwellian society. Our path of innovation could just as easily lead to Omega Point, an eternal world of infinite thought and possibility, as it could to some hysterical Brave New World. We cannot, however, reach our goals without first considering the ethical questions involved. We must match our advances in technology with advances in morality.