Bill Joy’s article details the idea that “robots will eventually succeed us – that humans clearly face extinction” (3). The idea of robots taking over the world is by no means new. Science fiction works have for quite some time alluded to this fate of the universe. The film I, Robot depicts a world where robots and humans coexist. The robots were created with three laws that would prevent them from harming humans and taking over. This, of course, goes awry, and the robots stop following the three laws. In the end the robots are fixed and all is well. This story is all too similar to the ideas that Bill Joy conveys in his article “Why the Future Doesn’t Need Us”. Quoting psychopaths as well as professors, Joy makes it seem all but inevitable that the human race will decline or become extinct and that robots or technology of some sort will take over the world. There would, of course, be no happy ending with Will Smith saving the world and all the robots returning to their original state, following the three laws. Could this Armageddon of sorts be a legitimate possibility, and if so, is there anything we can do about it?
To determine whether Joy’s beliefs could even be a possibility, we need to look at the basis of his ideas. He writes his article with information drawn from the writings of professors, scientists, and even the Unabomber. While one would automatically dismiss an idea coming from a psychopath, the backing of several scholars makes the idea of robots taking over the world seem plausible. Still, one can’t help but think the ideas are a little far-fetched. Simply quoting Murphy’s Law, that “Anything that can go wrong, will” (2), doesn’t mean that there is truth to the statement. To make his prediction seem more educated, Joy notes that “For decades, Moore’s law has correctly predicted the exponential rate of improvement of semiconductor technology…. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today” (6-7). One must realize that this is still just a prediction, which by no means makes it a reality. Even if you don’t believe that this future of machines is inevitable, it’s still important to come up with a solution to the problem.
There is no simple solution to the problem of technology advancing to the point of an intelligent machine. Although it seems easy enough to say we simply shouldn’t let it happen, that just won’t do. I’ll admit that, as an ordinary student, I have no control over what technologies and advances are created. This leaves the responsibility to society as a whole. I would like to think that people wouldn’t allow computers to take over our minds or give robots the capability of becoming the dominant species. If, however, this is a “gradual change that we would get used to,” as the article says (3), it would be much harder to control. We would get too far along and too reliant on the technology to simply stop using it. It would be comparable to all of the world’s oil reserves drying up tomorrow. We aren’t prepared to live without oil if that were to occur. A major problem would be that the majority of people, in our country at least, would no longer be able to get to work without their vehicles. There are of course countless other problems that would plague us as well. Similarly, if society got to a point where machines took over most, if not all, human jobs, we wouldn’t be able to simply shut down the machines and live without them. The rules about nuclear weapons came only after the first one was dropped, and thus, in a way, too late. Now several countries have access to them, and there is always a threat of nuclear warfare that would have dire consequences. This is why laws and actions would be needed to prevent this change from occurring before it is too late and we can’t go back. There is still time to set a course that will not lead to robots taking over. We need to avoid relying completely on machines. It’s one thing to use machines to do a job more efficiently, but the line gets crossed when people aren’t even needed to operate those machines. There should always be a way to live without those machines (i.e., we should still be able to grow food, get water, reproduce, etc.) if one day we needed to cut all ties with them. Even though Joy’s article offers few if any facts about what our future holds, everyone needs to be aware that a robot takeover is a possibility, though not an inevitability. If we, as the human race, don’t rely solely on the current and upcoming technologies that are meant to make life simpler, we have the chance to prove this premonition false.
3 comments:
I feel that this essay has some strong ideas early on but that they begin to get muddled the further it progresses. The opening seems to provide a strong basis that draws a good parallel between what Joy wrote and the film I, Robot. It presents the main ideas from both "Why the Future Doesn't Need Us" and I, Robot without going into unnecessary detail in regard to either. I also enjoyed the opening paragraph stylistically, as the question at the end still leaves the reader wondering whether the author is going to support or refute Joy's claims. While I could see the argument being made that the author's stance should be clearly stated in the opening paragraph, I personally found it to provide good motivation to keep reading.
However, I feel that the second paragraph begins to lose focus, as if even the author was unsure of which stance to take. The entire paragraph seems to be supporting an attack on Joy's words up until the very last sentence, which drastically shifts toward saying that people need to take heed of Joy's fears. If that's the case, then why? I think if the author is going to provide material supporting the idea that Joy's ideas are beyond reason, such as that he quotes Ted Kaczynski and claims Murphy's Law to be factual, then reasons also need to be given as to why Joy should be taken seriously regardless. I feel that a stronger basis for Joy's credibility would have improved the author's stance.
It seemed to me that the final paragraph, while providing some good ideas, strays from the prompt. The paragraph deals more heavily with what would happen in the event that Joy's predictions happened to come true than with a specific solution for that problem. The parallel between machines taking over and the world's oil supply running out nicely showcased the dire situation in which humanity would find itself, but just saying that we all need to prevent ourselves from becoming totally dependent upon technology isn't a very specific course of action. I think the author had a strong idea but simply needed to flesh it out more. I would like to see some ideas on how people could circumvent the issue of becoming too reliant upon technology and other specific actions that could prevent total dependence.
As a final note, I think this author has solid ideas and is able to fit a high volume of quotes from Joy's text nicely into his own work. I simply think he needs to stay focused on the topic of the essay and supplement that focus with more detailed thoughts. On a minor grammatical note, I would also recommend reviewing the use of semicolons.
Bill Joy’s article details the idea that “robots will eventually succeed us – that humans clearly face extinction” (3). The idea of robots taking over the world is by no means new. Science fiction works have for quite some time alluded to this fate of the universe. The film I, Robot depicts a world where robots and humans coexist. The robots were created with three laws that would prevent them from harming humans and taking over. This, of course, goes awry, and the robots stop following the three laws. In the end the robots are fixed and all is well. This story is all too similar to the ideas that Bill Joy conveys in his article “Why the Future Doesn’t Need Us”. Quoting psychopaths as well as professors, Joy makes it seem all but inevitable that the human race will decline or become extinct and that robots or technology of some sort will take over the world. There would, of course, be no happy ending with Will Smith saving the world and all the robots returning to their original state, following the three laws. Could this Armageddon of sorts be a legitimate possibility, and if so, is there anything we can do about it?
You constantly hear that anything is possible as long as you set your mind to it. Although this doesn’t always seem to work out, in the world of science many unimaginable ideas have become realities. Scientific discoveries have shaken people’s doubts about what our future holds. Bill Joy presents his fear of robots taking over by using this idea of anything being possible. As a prominent figure, something of a god in the technological world, Joy comes off as a man who knows what he’s talking about. If anyone would have knowledge of the possibilities that technology holds, it would be him. He fears that technology will advance to a point that we will no longer be able to control it, and thus the demise of the human race will occur. He backs up these predictions with statements such as “For decades, Moore’s law has correctly predicted the exponential rate of improvement of semiconductor technology…. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today” (6-7). He also mentions the opinions of other highly respected individuals who have made contributions to the technologies of today. It really drives home the point that, at the rate things are going, there is a potential for Joy’s predictions to unfortunately come true. With this ever-present threat of a mechanical takeover, something needs to be done before it’s too late.
There is no simple solution to the problem of technology advancing to the point of an intelligent machine. Joy attempted to come up with a resolution, but in the end hasn’t really changed anything in his life to prevent such a catastrophe. Although it seems easy enough to say we simply shouldn’t let it happen, that just won’t do. I’ll admit that, as an ordinary student, I have no control over what technologies and advances are created. This leaves the responsibility to society as a whole. I would like to think that people wouldn’t allow computers to take over our minds or give robots the capability of becoming the dominant species. If, however, this is a “gradual change that we would get used to,” as the article says (3), it would be much harder to control. We would get too far along and too reliant on the technology to simply stop using it. It would be comparable to all of the world’s oil reserves drying up tomorrow. We aren’t prepared to live without oil if that were to occur. Similarly, if society got to a point where machines took over most, if not all, human jobs, we wouldn’t be able to simply shut down the machines and live without them. That’s why precautions must be taken beforehand. The rules about nuclear weapons came only after the first one was dropped, and thus, in a way, too late. Now several countries have access to them, and there is always a threat of nuclear warfare that would have dire consequences. There is still time to set a course that will not lead to robots taking over. Like those governing nuclear weapons, strict laws need to be imposed that prevent the creation of an “intelligent” machine. I’m not a scientist or an engineer, but with the help of the geniuses we have in the world, strict guidelines that specifically list the lines that shouldn’t be crossed shouldn’t be too hard to come up with. There needs to be a limit on what our creations can do, because we don’t want one mistake to be the one that ends humanity. It would be pretty sad if our extinction were due to our constant need to better our lives. As a whole, we need to avoid relying completely on machines. It’s one thing to use machines to do a job more efficiently, but the line gets crossed when people aren’t even needed to operate those machines. There should always be a way to live without those machines (i.e., we should still be able to grow food, get water, reproduce, etc.) if one day we needed to cut all ties with them. I think awareness is key in the prevention of a robotic takeover. Future engineers and scientists need to be aware that they could create a monster without knowing it, and need to take Joy’s fears as a real possibility if they don’t realize the repercussions of what they create. If we, as the human race, don’t rely solely on the current and upcoming technologies that are meant to make life simpler, we have the chance to prosper without the threat of an unnecessary extinction.
Excellent feedback from John.
Your language is problematic here; you have enough errors (for instance, with semicolons) and enough problems with wordiness to at least occasionally distract the reader. I like the general topic, as you could probably tell from my comments in class, but you don’t develop an actual argument very quickly.
The long and meandering second paragraph covers some important material, but here’s the problem: you are still essentially introducing your topic – you finally have something approximating an argument at the *end* of the second paragraph, which is awfully late.
In the long and chaotic third section, you come close on several occasions to developing a genuinely interesting argument, but you inevitably pull back. Look at these lines as an example: “I’ll admit that as an ordinary student I have no control over what technologies and advances are created. This leaves the responsibility to society as a whole.” First, why do you take your own lack of control as a given? There are people, for instance, who reject a range of technologies – people who don’t own cars, who eat only organic food, who (in extreme cases) try to live off the land or even off the city (I’m thinking of freegans here). There was an opportunity here to challenge yourself, which you didn’t take. Similarly, you understandably shift over to the assertion that the real responsibility is with society as a whole - in other words, that it’s a political issue. But you go through the rest of the paragraph without really advocating any particular political *solution*.
The challenge in writing in response to a topic like this is finding something distinctive and even important to say on the topic. How do *you* respond to Joy? What do *you* propose? You avoid taking a strong stand here - in other words, you nearly avoid having an argument. That, far more than your wordiness and mechanical problems, mars this essay.