Wednesday, January 14, 2009

Megan Schwemer
Narrative and Technology
Blog 1

People have harbored a fear of new knowledge throughout the ages. However, most of the fears and dreams people have about possible new technologies belong more in the realm of science fiction than science. This is not a new phenomenon by any means. When Frankenstein was written, the idea of reanimating a corpse or creating some sort of flesh golem with electricity was probably considered plausible. More recently, the 50s and 60s promised us flying cars. The disasters that Joy predicts in his essay from his three main subjects – genetics, nanotechnology, and robotics – are equally unlikely, for a variety of reasons.

First, genetics. Genetic engineering of plants and animals gets a very bad rap. Opposition to it is based mostly on the as-yet-unproven idea that genetically engineered food is somehow unhealthy. If anything, genetically modified food should be embraced more heartily, not shunned. Furthermore, genetic engineering is much safer than the alternatives – such as crossbreeding with toxic plants or bombarding plants with radiation to see what useful mutations develop. Moreover, the prospect of scientists creating completely new organisms is probably impossible: current knowledge of genetics is nowhere near sophisticated enough to let us write an entirely new species’ DNA from scratch. Great care is also taken in genetic engineering. Many things that are possible, such as engineering plants to produce pharmaceutical drugs, have not yet been undertaken because of the risks involved. Given all this, I do not think that genetic modification will lead to the end of the world.

Second, nanotechnology. Joy’s ‘gray goo’ scenario depends on a variety of factors. Number one, these ‘nanites’ (if I may borrow the term from science fiction TV shows) would have to reproduce uncontrollably. If they did not replicate, there would be little reason to fear the planet being overrun by them. Number two, even if they did replicate, they would have to escape into the environment. Considering the number of precautions that would probably be taken in developing something like this, such a possibility is not a foregone conclusion either. Number three, these nanites would have to overtake biological organisms. A lump of nanites would not necessarily threaten all life on the planet; they would have to somehow develop into more complex structures, compete with biological organisms for resources, and win. This requires some degree of evolution and adaptation, which, unless they were programmed for it, seems unlikely. This scenario, then, belongs more to sci-fi shows like Stargate than to reality.

Finally, robotics. This is the most unlikely of Joy’s doomsday predictions. A man-versus-machine apocalypse is improbable for a whole variety of reasons. First, intelligent robots would have to be developed and used widely. A robot, by definition, cannot have free will or real intelligence, because it is programmed by fallible humans and thus cannot surpass its own programming, or its programmers. Furthermore, processing power does not equal intelligence. Even if a quantum computer were developed and super-fast processors put into such robots, the ability to process data quickly would not make them intelligent. As for mass development, this is unlikely as well; such advanced robots would probably be prohibitively expensive for all but the super-rich. The idea of humans replacing themselves with robots is also unlikely. Downloading one’s consciousness into a robot is probably impossible, considering the complexity of the human brain. Even if it were possible, would it be a path to immortality, or would the robot simply be a copy – someone who acts like you while you are still dead? Furthermore, who would want to inhabit a body with no pulse, no need to breathe or eat or sleep, and no ability to taste or touch or smell? And if people today are unwilling to prolong their lives with respirators and feeding tubes, why do we think they would want to do so with robots? It simply is not feasible. The interaction of man and machine will probably go as far as robotic prosthetics, and end there.

In conclusion, the possibility of mankind destroying itself with its own technology is remote. While we are fairly skilled at finding new methods of destruction, if our demise is to come at our own hand, it is likely to come from something more conventional, like biological or nuclear weapons, than from the fantastical scenarios Joy describes in his essay.


tricia said...

When writing critically about an article, it’s helpful to first summarize the author’s main ideas in a single sentence. The author’s thesis will give you a solitary point to dispute as well as a focal point. For this specific article, my version of Bill Joy’s point would be this:

Bill Joy believes that uncontrolled replication of technologies able to out-compete their creators spells D-O-O-M for the human race.

Beginning with the first paragraph: make your thesis shorter and more succinct. You mention Frankenstein in the fourth sentence, and reference to the story is absent from the remainder of the critique. It can make a paper much more interesting to have a side-by-side analogy, or even references to popular culture that will keep the reader’s attention. However, mentioning Frankenstein only once, in passing, left wide-open spaces in the potential analogy.

In your critique, you focused briefly on genetic engineering in agriculture, which Joy did not discuss. As for the postulate that current knowledge doesn’t support Joy’s argument, he admits that himself and refers, rather, to the rapid evolution of today’s technology taking a ride into the danger zone. Because the preceding paragraph offered no proof that any of your criticisms held weight, the last sentence leaves a sour taste in the reader’s mouth.

The above applies to all three of your ‘body’ paragraphs. Your nanotechnology argument is based on a series of assumptions that are not argued for, against, or otherwise. The assumptions are simply stated and left to prove that the scenario ‘belongs more to sci-fi shows like Stargate than to reality.’ You say, “These nanites would have to somehow develop into more complex structures and compete with biological organisms for resources, and win.” To begin, the ‘somehow’ is out of place… it gives the feel of a rant rather than a good-natured, scholarly critique. The more complex biological structures are not always the winners; the more efficient structures are the winners. Cabbage is actually much more complex than the human genome. However, Bill Joy does mention that plants whose leaves are as efficient as solar panels (which would render them inedible) would easily destroy today’s edible plants and potentially cause a serious food-chain disaster. A simple Darwinian trick, straight from the books… you are quite the weakest of the links, and so farewell to you.

In the fourth paragraph you miss Bill Joy’s point entirely when you propose that robots would have to be used widely. His thesis was that the robots would self-replicate, not be mass-produced and shipped to every corner of the globe. And to your question, “Why do we think they (people) would want to (prolong their lives) with robots?”, I have three answers: laziness, greed, and arrogance. And, for the record, quite a few people live for years on respirators and feeding tubes. (Time warp to 2003:) Terri Schiavo remained in a severely compromised neurological state and was provided a PEG tube to ensure the safe delivery of nourishment and hydration. (Return to now.) This case got national attention and drew rays of fire from the eyes of pro-lifers across the states. People DO try to live as long as humanly (or machinely) possible. P.S. Robotic prosthetics already exist. They are pretty incredible, measuring shifts in pressure precisely enough for their users to walk with almost no noticeable trouble. They allow skateboarding, running, jumping, and general goofing around. It’s quite unlikely that this kind of science will simply cease to expand its base of knowledge. Until the blind can see, we will continue to develop machine-man meshes.

There is a piece of gold in this paragraph, one that I also thought of while reading Joy’s article: “Processing power does not equal intelligence.” You speak the truth here!

The conclusion is hasty and contradictory. Biological and nuclear weapons are our own technology and could quite possibly destroy the human race.

Megan Schwemer said...

People have harbored a fear of new knowledge throughout the ages. However, most of the fears and dreams people have about possible new technologies belong more in the realm of science fiction than science. This is not a new phenomenon by any means. When Frankenstein was written, the idea of reanimating a corpse or creating some sort of flesh golem with electricity was probably considered plausible. More recently, the 50s and 60s promised us flying cars. The disasters from genetic engineering that Joy describes in his essay are highly unlikely and show that he is relatively uninformed on the subject.

Joy begins his very short section on the dangers of genetic engineering by mentioning the benefits it offers. Some of the things he mentions, such as curing disease and improving crops, are already being researched and implemented. For example, mice are being used as models in treating Alzheimer’s and other disorders [1,2]. The possibilities of treating diseases with transgenic models, and of engineering plants to produce the drugs, are virtually limitless, and we have only begun to realize them. Other things he mentions betray a lack of knowledge of the subject, as when he says that genetic engineering promises “to create tens of thousands of novel species of bacteria, plants, viruses, and animals” and “to replace reproduction, or supplement it, with cloning”. If by ‘novel species’ he means transgenic varieties, then yes, that is likely. If, however, he means completely new organisms, then he is wrong. While a few new species might be produced for research purposes, there is little conceivable reason that so many creatures would be produced, either now or in the future. As for cloning, it does not seek to replace human or animal reproduction in the near or far future. It is much too complicated and costly for such use (and even if costs were reduced, there would be little reason to use it in place of conventional animal breeding). Cloning would instead likely be used within animal breeding, e.g. for cloning prize bulls.

His next idea – that mankind might, by means of genetic engineering, separate into “several separate and unequal species” and thus undermine democracy – is equally flawed. First, to truly realize such a dystopia, the different human species would have to be unable to reproduce together (which is more unlikely than it sounds, considering that even species as distinct as lions and tigers are capable of interbreeding). Second, democratic equality is not based upon genetic equality. Different people are not equal athletically or intellectually, but they are nonetheless considered equal under the law. Even people with genetic disorders are considered equal.

Joy’s next point, drawn from a friend’s editorial about how this ‘new botany’ selects plants for profitability rather than evolutionary viability, is a non-issue. Humans have been selecting plants for profit since long before the advent of genetic engineering. Whether we choose plants through selective breeding or genetic engineering, we guide their development, and such an argument arrives several millennia too late. As for the possibility of transgenic plants or other organisms escaping into the wild, measures are already being taken to prevent such occurrences [3]. Transgenic organisms are being engineered with specific weaknesses so that they cannot spread into the wild unchecked.

Joy’s last concern, about the creation of a so-called ‘White Plague’ engineered to target specific groups, is also a near-impossibility. Detectors for genetically modified pathogens are already being developed [4]. So even if such a disease were produced, by that time detectors and countermeasures would already be in place. A genetically modified super-disease, then, is by no means an inevitability.

In conclusion, Joy’s fears about genetic engineering stem from his lack of knowledge of the subject. With countermeasures being put into place, there is no reason he or anyone else should lie awake at night fearing that the human race will be destroyed by some genetically modified super-plague.


1. “Genetic Engineering Cures Mice Of Brain Disorder.” 12 February 2007. Accessed 20 January 2009.

2. Allen, Mary Emma. “Genetic Engineering in Mice May Aid Alzheimer’s Research.” 30 May 2007. Accessed 20 January 2009.

3. “New Strategy To Prevent Genetically Altered Rice From Uncontrolled Spreading.” 21 March 2008. Accessed 20 January 2009.

4. “On The Trail Of Rogue Genetically Modified Pathogens.” 18 March 2008. Accessed 20 January 2009.

Adam Johns said...

Tricia - It's hard to follow up on an entire class' thoughts - you did well here.

Megan - The introduction, as in the first version, still displays at least a modest tendency toward overgeneralization - focusing more narrowly from the very beginning would have been desirable, although you may do very well without doing so.

Your critique of Joy in the second paragraph is interesting. I appreciate both the close reading and the research. My take on Joy is that he is trying to think decades into the future, with a mind conditioned by the kind of rapid advancement that has characterized computer hardware. I could argue that you're mistaking a long-term argument for a short-term one, but you, in turn, might assert the illegitimacy of long-term predictions. It's good material, but I think imperfectly engaged with Joy's style of futurism.

The next paragraph lacks the detail of the previous one. As an aside, I'll point out that a very prominent genetic engineering professor believes that the multiple species outcome is very likely: see Lee Silver's Remaking Eden on this subject. Your premise is that liberal democracy (including, e.g., equality under the law) can survive substantial genetic engineering; Joy disagrees, and considers liberal democracy to be profoundly threatened by genetic engineering. See also Bill McKibben's Enough on this subject. Here's my point: you are right that Joy is highly imperfect and lacks detail on these subjects - on the other hand, so do you.

Good but overly brief discussion of transgenic plants - Joy, of course, is arguing (by analogy to software and hardware engineering) that the safeguards, in self-replicating systems, may be inadequate.

Your discussion of a "White Plague," is simply very brief.

Overall: You have added some good research and some good-but-limited close readings of Joy. This is an interesting and intellectually engaged critique of Joy. It's also, while being much more focused than the last version, not terribly focused - any one of your body paragraphs could have been expanded to be the whole paper. The benefit there would have been a closer, more detailed engagement with the complexities of Joy's ideas - you are critiquing moments in his essay, but you're also ripping them out of their complex context. That doesn't mean that you're wrong, by any means - just that this still somewhat scattered approach isn't as convincing as it could be.