Narrative and Technology
People have harbored fears of new knowledge throughout the ages. However, most of the fears and dreams people attach to possible new technologies belong more to the realm of science fiction than science. This is not a new phenomenon by any means. When Frankenstein was written, the idea of reanimating a corpse or creating some sort of flesh golem with electricity was probably considered plausible. More recently, the 1950s and 60s promised us flying cars. The disasters Joy predicts from the three main technologies he discusses in his essay – genetics, nanotechnology, and robotics – are equally unlikely, for a variety of reasons.
First, genetics. Genetic engineering of plants and animals gets a very bad rap. Opposition to it rests mostly on the as-yet-unproven idea that genetically engineered food is somehow unhealthy. If anything, genetically engineered food should be embraced more heartily, not shunned. Furthermore, genetic engineering is much safer than the alternatives, such as crossbreeding with toxic plants or bombarding plants with radiation to see what useful mutations might develop. The prospect of scientists creating completely new organisms is also probably out of reach: current knowledge of genetics is by no means sophisticated enough to allow us to write an entirely new species' DNA from scratch. Great care is also taken in genetic engineering. Many things that are possible, such as engineering plants to produce pharmaceutical drugs, have not yet been undertaken because of the risks involved. All this being the case, I do not think that genetic modification will lead to the end of the world.
Second, nanotechnology. Joy's 'gray goo' scenario depends on a chain of factors. Number one, these 'nanites' (if I may borrow the term from science fiction TV shows) would have to reproduce uncontrollably; if they were found not to replicate, there would be little reason to fear the planet being overrun by them. Number two, even if they did replicate, they would have to escape into the environment. Considering the precautions that would probably be taken in developing something like this, such an escape is hardly a foregone conclusion. Number three, these nanites would have to outcompete biological organisms. A lump of nanites would not necessarily threaten all life on the planet; they would have to somehow develop into more complex structures, compete with biological organisms for resources, and win. That requires some degree of evolution and adaptation, which seems unlikely unless they were programmed for it. This scenario, then, belongs more to sci-fi shows like Stargate than to reality.
Finally, robotics. This is the most unlikely of Joy's doomsday predictions. A man-versus-machine apocalypse is improbable for a whole variety of reasons. First, intelligent robots would have to be developed and widely used. A robot, by definition, cannot have free will or real intelligence: it is programmed by fallible humans, and thus cannot surpass its own programming, or its programmers. Furthermore, processing power does not equal intelligence. Even if a quantum computer were developed and super-fast processors put into such robots, the ability to process data quickly would not make them intelligent. As for mass production, that is unlikely as well; such advanced robots would probably be prohibitively expensive for all but the super-rich. The idea of humans replacing themselves with robots is also far-fetched. Downloading one's consciousness into a robot is probably impossible, given the complexity of the human brain. Even if it were possible, would it be a path to immortality, or would the robot simply be a copy – someone who acts like you while you remain dead? Furthermore, who would want to inhabit a body with no pulse, no need to breathe, eat, or sleep, and no ability to taste, touch, or smell? And if people today are unwilling to prolong their lives with respirators and feeding tubes, why would we expect them to do so with robots? It simply is not feasible. The interaction of man and machine will probably go as far as robotic prosthetics, and end there.
In conclusion, the possibility of mankind destroying itself with its own technology is remote at worst. While we are fairly skilled at finding new methods of destruction, if our demise is to be at our own hand, it is likely to come from something more conventional, like biological or nuclear weapons, than from the fantastical scenarios Joy describes in his essay.