Thursday, August 30, 2007

Technology R Us

First Graded Blog

This is probably common knowledge, but did you know the same guy who created the Nobel Peace Prize also invented dynamite? He was just trying to do good stuff for the world, you know, create easier ways to blast through mountains to make railroads or whatever. Did he think that people would use that to blow each other up? Probably not. It took someone thinking he was dead for him to realize the destructive potential of his creation. Is that what we all need? Mistaken obituaries?

In "Why the Future Doesn't Need Us" it seems like the utopian view of technology is that it's meant, in the end, to level the playing field. In theory, it'd be awesome to have robots so advanced that they could eliminate the necessity of human work forever and ever. My friend Alex is by far the sweetest-hearted guy I know. He also happens to be some kind of mechanical and computer genius who is convinced that if he can somehow program artificial intelligence, everything will be better. Nobody would need money, so no one would have to work, everything would be free, and we could spend our time appreciating the beauty of the world. He confessed this to me, adamant and full of determination, and I, being the entertainment nerd I am, squinted my eyes and said, "You're going to create Skynet." I'm also the girl who threatened to break up with her boyfriend because he was considering working on Project Aura and I was afraid of being the Will Smith in the story who would have to save us all from a rise of the machines. Though that might be kind of nifty.

I don't want to be cynical, but these things that we've read all seem to be saying the same thing: that no matter how advanced our technology gets and no matter its original functions, someone is going to suffer for it. The downside of the robot-run society in which everyone is free of the shackles of money and responsibility is the kind of thing we see in movies. Robots run rampant, gain minds of their own, blah blah blah, "In order to uphold the 3 Laws, in order to protect you, we might have to hurt you." All I could think about when I was reading Marcuse was that law they were going to pass that would make it alright for the government to get into our business. Phone calls and emails and whatever else, this law was going to give the government access to any information all the time. In order to protect our freedom, it was necessary to take some of our rights away. He says "liberty can be made into a powerful instrument of domination."

So we're making all these jumps in technology so we can have this freedom. So we can get as much done for as little work and for as much profit as possible. For this ultimate freedom. But Joy is putting forth an argument that says if we're not careful (and really, when have we ever been careful?) then this technology on which we're relying to give us this freedom is going to ultimately spell our doom. In making shields from weapons, the byproduct is going to be stronger weapons. People are gonna die. People are dying, and not even in a socially complicated way like in "Life in the Iron Mills." A guy died playing World of Warcraft for too long. Blizzard didn't create the game thinking that it would kill someone, though I'm sure they made it purposely addictive.

I agree that it's how we choose to use that technology that's the freedom. Robert Shenck (I think) had an essay where he said that soon intelligence isn't going to be measured by what we know, but by what we've experienced. Information is really easy to get at nowadays. But who does anything anymore? There was an old Nickelodeon commercial, it had to be from the early '90s, where they were advertising Actual Reality. "Feel the wind in your hair, because you're holding the ball! Feel the sun on your face, because you're actually there!" I guess this was just when video games were beginning to get super popular and childhood obesity reached new levels. There are simpler ways in which technology is going to kill us.

But I think... well, I don't know. People are getting smarter. Things move faster; by the time I was in the 7th grade, my 3rd grade brother and I were studying the same things. Your parents ever say, "Hey, I didn't read that book you're reading now, in middle school, till I was in college!"? Children are teaching their parents how to use computers. There's going to come a point at which they're teaching Human/Robot ethics in kindergarten. I think that we have to kind of trust that the next generation of thinkers will also be more educated about the blind spots that prevented Alfred Nobel from seeing that dynamite could also kill people. And instead of just acknowledging the fact, that they'll have the means by which to save us.


Dan said...

Well, I would have to agree that it is HOW we use the technology that is either going to help or hurt us.

Artificial intelligence is a big issue here. What is going to stop the robots from turning against us? For this reason, we need to be extremely careful with what we do with new technologies. Make sure that the machines don't see us as their enemy. However, if we are careful, the future can bring wonderful things. Just imagine, cars that can drive you from home to work while you sleep, houses that clean and cook for you (or have their own robot maid), computers that do all of your work (even your taxes) and you don't even have to touch the keyboard. The list goes on. New technologies will make life easier, but only if we are careful. If not, it could turn around and kick us in the face.

Technically speaking, machines might (and probably will) be better than us. For this reason, we must gain their trust so that we can co-exist. If we don't, the machines will see us as nothing more than a pest that must be exterminated.

The future holds many dangers, but if we pay attention and actually do something about them, then maybe we won't have to worry about a machine uprising. If so, I am looking forward to what the future holds.

Adam Johns said...

I'd be leery of retelling stories "everyone knows" about someone like Nobel; I think it's a myth as much as anything. Anyway, that's nitpicking...

Beyond that, there are lots of clever moments here. You successfully sketch out a relationship between Marcuse and Joy in one sentence, for instance. Your discussion of "actual reality" and death through video games (this mostly happens in Korea, right?) is interesting but abbreviated - a potential topic that never gets much attention.

The compelling thing here -- where you use both Joy and Marcuse (weirdly and perhaps improperly fused) to analyze your own life -- is the closest thing here to being developed, but it's still importantly unfinished.

Why? You sketch out a real conflict in your life: you're a dystopian in close contact with two utopians. Bringing in Skynet, Will Smith, etc. is funny, but then you don't continue to do the obvious thing (which is where Joy takes us).

What are the ethical issues involved? Here's how you might address it in a hypothetical midterm (likely fictionalized): What are the obligations of a dystopian faced with a successful Alex (the fact that he's so nice nicely complicates things)? Joy struggles to take us precisely into this territory with his move towards politics at the end...

Adam Johns said...

To clarify the last comment. Ultimately, here's my question. Why, instead of analyzing the implicit ethical problems more closely, did you jump beyond them into other issues (also interesting, but only briefly described)? How could you have kept your focus?