Tuesday, September 9, 2008

Consequence of Knowledge

While reading this article I was struck by one paragraph in particular, which I think best sums up Joy’s argument. J. Robert Oppenheimer said, “It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences” (13).

The most profound consequence, if you could call it that, is a potential takeover of the world by the “superior robots,” or the accidental or intentional release of some toxic agent, whether a nanotechnology device or some type of biological weapon, that obliterates all life on Earth.

The analogy he makes to describe the robot takeover is profound: “In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials” (3). On its own, this statement doesn’t seem too alarming without its background. When the continents joined, however, the placentals from North America held a competitive advantage over the marsupials and eventually drove most of them to extinction, pushing them further and further back. It gets more complex than that: Joy goes on to argue that it is our own continuing development that may produce a robot superior to us, making the human race the marsupials and the robots the placentals.

I feel less cause for concern about a robot takeover, because robotics is not as advanced as biological weaponry, though it may become a greater threat in the near future. On the other hand, I think he has a valid and urgent cause for concern in the biological and genetic parts of his argument, especially when it comes to Drexler’s “gray goo” (10).

I feel he is credible because he is a co-founder of Sun Microsystems, and the computer as we know it would be vastly different if it weren’t for him; he helped develop technologies we still use on the internet today, such as Java. Even so, much of what he writes about is outside his own field of work, but when he discusses fields of study other than his own he draws on the work of other reputable scientists. He seems mainly to have compiled their ideas, then interpreted and summarized them.

I think the response to Joy should be for the scientific community to recognize where to stop research that could become dangerous and uncontrollable. The hard part is knowing exactly when the work becomes too dangerous, since a scientist may be blinded to the danger of their own work. Also, who is to say what is dangerous and what is not, the government? It also raises the question of intellectual property rights. And who is going to stop a rogue scientist who continues dangerous work in a private, maybe secret, setting? Returning to Oppenheimer’s quote: when do the consequences become too great to continue the spread of knowledge?

4 comments:

Jake The Snake said...

Alright, I found this entry to be comprehensive and understandable, hitting upon the issues at hand, in which we take a side either for or against Joy's writing. That being said, one or two points involving the robot takeover could have been expanded on.

Now, I've had a chance to read into the Gray Goo Scenario, but not everyone in the world has. That can be allowed to slide, but I could have done with a 'for instance' on how the human race could destroy itself, beyond the vague statement about nanotechnology and toxic substances. Using a few movie or book parallels to drive the point home would suffice. Or better yet, read George Orwell's "1984" and draw some scary conclusions from it about present-day life.

This isn't a BAD response. I just feel it's got a few vague areas, especially in dealing with the scientists issue. As in the past, people will always use science and the pursuit of knowledge as an excuse to do anything out of the ordinary, whether or not it yields good results. As I see it, the trouble is with allowing people, the human factor, to determine that the end result of a new development should be used negatively, or that its consequences, whatever they are, should be ignored.

Therefore, I propose that development whose ends cannot justify the means (or at least clean up after itself, retrospectively) should be halted before its rampant application, until such time as our ability to cope with it catches up. Progress is a good thing, but unchecked development whose damage we can barely contain is pure idiocy.

Mathew said...

While reading this article I was struck by one paragraph in particular, which I think best sums up Joy’s argument. J. Robert Oppenheimer said, “It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences” (13).

The most profound consequence, if you could call it that, is a potential takeover of the world by the “superior robots,” or the accidental or intentional release of some toxic agent, whether a nanotechnology device or some type of biological weapon, that obliterates all life on Earth.

The analogy he makes to describe the robot takeover is profound: “In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials” (3). On its own, this statement doesn’t seem too alarming without its background. When the continents joined, however, the placentals from North America held a competitive advantage over the marsupials and eventually drove most of them to extinction, pushing them further and further back. It gets more complex than that: Joy goes on to argue that it is our own continuing development that may produce a robot superior to us, making the human race the marsupials and the robots the placentals.

I feel less cause for concern about a robot takeover, because robotics is not as advanced as biological weaponry, though it may become a greater threat in the near future. On the other hand, I think he has a valid and urgent cause for concern in the biological and genetic parts of his argument, especially when it comes to Drexler’s “gray goo” (10). My favorite example of this is the Borg from Star Trek; for those who are unfamiliar, the Borg are a “race” of half-biological, half-artificial life forms that strive to make all life in the galaxy part of their “collective.”

I feel he is credible because he is a co-founder of Sun Microsystems, and the computer as we know it would be vastly different if it weren’t for him; he helped develop technologies we still use on the internet today, such as Java. Even so, much of what he writes about is outside his own field of work, but when he discusses fields of study other than his own he draws on the work of other reputable scientists. He seems mainly to have compiled their ideas, then interpreted and summarized them.

I think the response to Joy should be for the scientific community to recognize where to stop research that could become dangerous and uncontrollable. The hard part is knowing exactly when the work becomes too dangerous, since a scientist may be blinded to the danger of their own work. Also, who is to say what is dangerous and what is not, the government? It also raises the question of intellectual property rights. And who is going to stop a rogue scientist who continues dangerous work in a private, maybe secret, setting? Returning to Oppenheimer’s quote: when do the consequences become too great to continue the spread of knowledge?

Adam Johns said...

Jake - it's good for your response to focus on the argument and on possible new formulations of it, but I feel like you pushed that tendency a little *too* far away from what Matt was doing.

Mathew -

The first several paragraphs feel disconnected to me. It’s fine to start out by focusing on Oppenheimer, but you rapidly shift to unpacking Joy’s metaphor about marsupials and placentals. Your discussion of the metaphor is good, but I’m not sure where you’re going with it - are you simply repeating Joy in more detail, or do you have something to add to his argument?

In the following paragraph you shift topics again, to the Borg and grey goo. Again, this isn’t a bad starting point - but what you’re doing is (to exaggerate slightly) picking a new, more or less random topic from Joy with each paragraph. I see nothing like an argument yet.

Your discussion of Joy’s credibility is fine in isolation, but it has no context. How is it supposed to fit in with some larger discussion or argument?

Your final paragraph - about the possible solutions - is so broad and vague as to have effectively no content. Where you are posing seemingly rhetorical questions, e.g., “Also, who is to say what is dangerous and what is not, the government?” you should really be providing answers.

Short version: this reads like a collection of more or less random thoughts on Joy, with no particular direction, and certainly no clear argument.