Friday, February 22, 2008

Joe Liu's informal blog # 4

Sorry for this late informal post. I usually try to post things before class, but I forgot to do so this week.

But anyways . . . while I was watching the movie Terminator 2, I started thinking about some of the past essays we read, and also Blade Runner.

If you haven’t seen T2, it is all about how robots are projected to take over the world. In the future, though, a human general leads the human resistance to victory. To try to secure their victory, the machines send a robot (the T-1000) back in time to kill that general, John Connor. The human resistance, in turn, sends back an older prototype (a Terminator, the same model from the first movie) to protect John Connor.

This has seriously been my favorite movie ever since I was young. I don’t know what it is about it (perhaps the special effects, or the way the T-1000 can morph into almost anything), but it always keeps me watching.

In a way, this movie directly relates to the idea of robots becoming so advanced that they could one day take over the world. What really made me think, though, is this: if the robots want to take over the world, what is their goal? Why would they even care about ruling it? At least in T2, the robots don’t have empathy. They can’t feel emotions, they can’t feel pain, and they can’t really “think” on their own. All they can do is take in their surroundings, use the advanced programs in their systems to instantly calculate the probabilities of any situation, and make judgments based on those figures. When human dictators and leaders want to take over, they do it for wealth, fame, respect, power, and so on. I can understand that, but I can’t understand why a robot would want anything like that. I know it’s a dumb question, but I hadn’t really thought about it until now.

In Blade Runner (well, really the book it’s based on), at the end when the “chickenhead” Isidore starts interacting with the three androids that took up residence in his apartment, it really made me think about the “humanness” of the androids in the book. They are essentially human, after all. There are mental tests (or bone marrow scans) that can be used to catch the androids, but if no one bothered them, I think they could live peacefully alongside the human race.

John Connor telling the Terminator why he shouldn’t kill people:
“I don’t care, we gotta stop her . . . haven’t you learned anything yet? Haven’t you figured out why you can’t kill people? Look, maybe you don’t care if you live or die, but not everyone is like that. We have feelings. We hurt, we’re afraid. You gotta learn this stuff. I’m not kidding, it’s important.”

It’s funny, I guess, because he is telling the machine to “learn.”

At the end of the movie, the T-1000 is destroyed, and this quote closes it out:
“because if a machine can learn the value of human life, maybe we can too”
John Connor’s life was saved, and so was the Earth.

I know this is a random post, but it’s late and I spent too much time watching clips from the movie. I’ll most likely post a Zork blog sometime soon. My apologies if this post seems to end very abruptly.
