Megan Schwemer
Narrative and Technology
Blog 1
People have feared new knowledge throughout the ages. However, most of the fears and dreams people harbor about possible new technologies belong more in the realm of science fiction than science. This is not a new phenomenon by any means. When Frankenstein was written, the idea of reanimating a corpse or creating some sort of flesh golem with electricity was probably considered plausible. More recently, the 50s and 60s promised us flying cars. The disasters Joy predicts from the three main technologies he discusses in his essay – genetics, nanotechnology, and robotics – are equally unlikely, for a variety of reasons.
First, genetics. Genetic engineering of plants and animals gets a very bad rap. Opposition to it is based mostly on the as-yet-unproven idea that genetically engineered food is somehow unhealthy. If anything, genetically modified food should be embraced more heartily, not shunned. Furthermore, genetic engineering is much safer than the alternatives – such as crossbreeding with toxic plants or bombarding plants with radiation to see what useful mutations they might develop. Also, the idea of scientists creating completely new organisms is probably unrealistic: current knowledge of genetics is by no means sophisticated enough to allow us to create an entirely new species' DNA from scratch. Great care is also taken in genetic engineering. Many things that are possible, such as engineering plants to produce pharmaceutical drugs, have not yet been undertaken because of the risks involved. With all this being the case, I do not think genetic modification will lead to the end of the world.
Second, nanotechnology. Joy's 'gray goo' scenario depends on a variety of factors. Number one, these 'nanites' (if I may borrow the term from science fiction TV shows) would have to reproduce uncontrollably. If they were found not to replicate, there would be little reason to fear the planet being overrun by them. Number two, even if they did replicate, they would have to escape into the environment. Considering the precautions that would probably be taken in developing something like this, such a possibility is not a foregone conclusion either. Number three, these nanites would have to overtake biological organisms. A lump of nanites would not necessarily threaten all life on the planet; they would have to develop into more complex structures, compete with biological organisms for resources, and win. That requires some degree of evolution and adaptation, which seems unlikely unless they were programmed for it. This scenario, then, belongs more to sci-fi shows like Stargate than to reality.
Finally, robotics. This is the most unlikely of Joy's doomsday predictions. A man-versus-machine apocalypse is improbable for a whole variety of reasons. First, intelligent robots would have to be developed and used widely. A robot, by definition, cannot have free will or real intelligence, because it is programmed by fallible humans and thus cannot surpass its own programming, or them. Furthermore, processing power does not equal intelligence. Even if a quantum computer were developed and super-fast processors put into such robots, the ability to process data quickly would not make them intelligent. As for mass development, this is unlikely as well; such advanced robots would probably be prohibitively expensive for all but the super-rich. The idea of humans replacing themselves with robots is also unlikely. Downloading one's consciousness into a robot is probably impossible, considering the complexity of the human brain. Even if it were possible, would it be a path to immortality, or would the robot simply be a copy – someone who acts like you while you are still dead? Furthermore, who would want to inhabit a body that has no pulse, no need to breathe or eat or sleep, no ability to taste or touch or smell? And if people today are unwilling to prolong their lives with respirators and feeding tubes, why do we think they would want to do so with robots? It just is not feasible. The interaction of man and machine will probably go as far as robotic prosthetics, and end there.
In conclusion, the possibility of mankind destroying itself with its own technology is remote at best. While we are fairly skilled at finding new methods of destruction, if our demise is to come at our own hand, it is more likely to come from something conventional, like biological or nuclear weapons, than from the fantastical scenarios Joy describes in his essay.
4 comments:
When writing critically about an article, it is helpful to first summarize the author's main ideas in a single sentence. The author's thesis gives you a single point to dispute as well as a focal point. For this article, my version of Bill Joy's point would be this:
Bill Joy believes that uncontrolled replication of technologies able to out-compete their creators spells D-O-O-M for the human race.
Beginning with the first paragraph, make your thesis shorter and more succinct. You mention Frankenstein in the fourth sentence, and reference to the story is absent from the remainder of the critique. A side-by-side analogy, or even references to popular culture, can make a paper much more interesting and hold the reader's attention. However, mentioning Frankenstein in passing only once left wide-open spaces in the potential analogy.
In your critique, you focused briefly on genetic engineering in agriculture, which Joy did not discuss. As for the postulate that current knowledge doesn’t support Joy’s argument, he admits that himself and refers, rather, to the rapid evolution of today’s technology taking a ride into the danger zone. Because the preceding paragraph offered no proof that any of your criticisms held weight, the last sentence leaves a sour taste in the reader’s mouth.
The above describes all three of your 'body' paragraphs. Your nanotechnology argument is based on a series of assumptions that are not argued for, against, or otherwise; the assumptions are simply stated and left to prove that the scenario 'belongs more to sci-fi shows like Stargate than to reality.' You say, "These nanites would have to somehow develop into more complex structures and compete with biological organisms for resources, and win." To begin, the 'somehow' is out of place; it gives the feel of a rant rather than a good-natured, scholarly critique. Also, the more complex biological structures are not always the winners; the more efficient structures are. The cabbage genome is actually more complex than the human genome. However, Bill Joy does mention that plants whose leaves are as efficient as solar panels (which would render them inedible) could easily out-compete today's edible plants and potentially cause a serious food-chain disaster. A simple Darwinian trick, straight from the books: you are quite the weakest of the links, and so farewell to you.
In the fourth paragraph you miss Bill Joy's point entirely when you propose that robots would have to be used widely. His thesis was that the robots would self-replicate, not be mass-produced and shipped to every corner of the globe. And to your question, "Why do we think they [people] would want to [prolong their lives] with robots?", I have three answers: laziness, greed, and arrogance. And, for the record, quite a few people live for years on respirators and feeding tubes (Time Warp 2003). Terri Schiavo remained in a severely compromised neurological state and was provided a PEG tube to ensure the safe delivery of nourishment and hydration (Return to Now). This case got national attention and drew rays of fire from the eyes of pro-lifers across the states. People DO try to live as long as humanly (or machinely) possible. P.S. Robotic prosthetics already exist. They are pretty incredible, measuring shifts in pressure well enough for their users to walk with almost no noticeable trouble. They allow skateboarding, running, jumping, and general goofing around. It is quite unlikely that this kind of science will simply cease to expand its base of knowledge. Until the blind can see, we will continue to develop machine-man meshes.
There is a piece of gold in this paragraph, something I also thought while reading Joy's article: "Processing power does not equal intelligence." You speak the truth here!
The conclusion is hasty and contradictory. Biological and nuclear weapons are our own technology and could quite possibly destroy the human race.
Trica - It's hard to follow up on an entire class' thoughts - you did well here.
Megan - The introduction, as in the first version, still displays at least a modest tendency toward overgeneralization - focusing more narrowly from the very beginning would have been desirable, although you may do very well without doing so.
Your critique of Joy in the second paragraph is interesting. I appreciate both the close reading and the research. My take on Joy is that he is trying to think decades into the future, with a mind conditioned by the kind of rapid advancement that has characterized computer hardware. I could argue that you're mistaking a long-term argument for a short-term one, but you, in turn, might assert the illegitimacy of long-term predictions. It's good material, but I think imperfectly engaged with Joy's style of futurism.
The next paragraph lacks the detail of the previous one. As an aside, I'll point out that a very prominent genetic engineering professor believes that the multiple species outcome is very likely: see Lee Silver's Remaking Eden on this subject. Your premise is that liberal democracy (including, e.g., equality under the law) can survive substantial genetic engineering; Joy disagrees, and considers liberal democracy to be profoundly threatened by genetic engineering. See also Bill McKibben's Enough on this subject. Here's my point: you are right that Joy is highly imperfect and lacks detail on these subjects - on the other hand, so do you.
Good but overly brief discussion of transgenic plants - Joy, of course, is arguing (by analogy to software and hardware engineering) that the safeguards, in self-replicating systems, may be inadequate.
Your discussion of a "White Plague" is simply very brief.
Overall: You have added some good research and some good-but-limited close readings of Joy. This is an interesting and intellectually engaged critique of Joy. It is also, while much more focused than the last version, not terribly focused - any one of your body paragraphs could have been expanded into the whole paper. The benefit would have been a closer, more detailed engagement with the complexities of Joy's ideas - you are critiquing moments in his essay, but you're also ripping them out of their complex context. That doesn't mean you're wrong, by any means - just that this still somewhat scattered approach isn't as convincing as it could be.