Tuesday, December 9, 2008

Thought, AI, and the Nova

While you read this, the sun is getting older (Lyotard 8). It is burning up its stores of energy until one day, some five billion years from now, there will be no more. The inevitable death of our sun brings with it the inevitable death of all life on earth, and with that, the death of all thought. Questions will also cease to exist -- not just simple yes-or-no questions, but the answerless ones too, the questions philosophers spend years of their lives pondering. You have heard these questions before: "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Why waste time pondering them? We, as a society of human beings, need to focus not on these answerless questions but on a way to preserve and continue our thinking and cognitive abilities beyond the time when there is no sun to give us energy, no moon to look up at, and no earth to live on. We need to develop a way to continue human thought beyond the time of the sun's death. If we don't, then "after the sun's death, there won't be a thought to know that its death took place" (Lyotard 9).

To continue human thought after the sun's inevitable death, we must first examine what "thought" is. Next, we must look into artificial intelligence as a way of carrying and further developing human thought after the sun's death, and then into materials that could survive that death, or technologies that would enable thought to outrun the effects of the cosmic catastrophe.

First, what is thought? Is it something we are capable of producing from birth? That is, is the ability to produce advanced thought available at birth, or does it need to be developed? Jean Piaget, a developmental theorist, believed cognitive abilities are developed, and that development can be recognized in four stages: sensorimotor, preoperational, concrete operational, and formal operational (Bertocci).

The first stage of development, sensorimotor, occurs between a child's birth and about two years of age. These children experience the world through movement and their senses (Bertocci). Take a baby, for example. If the baby is hungry, it cries. If it poops, it cries. It knows when it wants or needs something, but it cannot express that the way you or I can. The mind at this stage is only beginning to develop object permanence. So when you are playing with a baby and you hide a toy under the blanket, the baby knows the toy was in your hand, but it thinks the toy has disappeared because it can no longer see it. Or take peek-a-boo: when you hide, the baby cries because it does not know where you are and it gets scared. But when you pop back out, the baby stops crying.

The next stage of development, the preoperational stage, occurs when the child is between the ages of about two and seven. Here, the child's semiotic capabilities increase and rapid language development occurs (Bertocci). For example, are you thinking now as you read this essay, or are you just reading? If you are just reading over this essay, seeing letters and putting them together to form words, are you thinking? Surely you are. You may not remember it, but letter recognition and reading were once skills you had to learn. During my three-year tenure at Jumpstart, a program working toward the day every child enters school prepared to succeed, I spent months, if not a whole year, partnered with my preschool child working on letter recognition: not even reading, just being able to look at a word and name the letters that make it up. It was a slow and arduous task, but a necessary one. Without being able to recognize letters, a person cannot learn to read or write.

The third stage, concrete operational, occurs in children between the ages of seven and eleven. Here, cognitive development reaches the point where conservation is acknowledged (Bertocci). No longer will a child put a quarter-sized blob of glue on paper to attach a single elbow noodle. The child can also think logically about concrete events, conceptualize things like math with numbers but not abstract quantities, and follow the trial-and-error approach to problem solving (Bertocci). Say you don't have money to buy your wife's life-saving medication, so you walk into the pharmacy and steal it. Was that wrong? A mind in the concrete operational stage would say yes, it was wrong, even though you did it to save her life. Such a mind knows the difference between right and wrong but struggles to bring abstractions into the decision.

The final and most advanced stage of development is called formal operational. This stage encompasses most children and adults over the age of eleven, though take note that not everyone is able to perform cognitive processes at the formal operational level. These processes consist of abstract thinking such as hypothetico-deductive reasoning, inductive reasoning, and deductive reasoning (Bertocci). This is the stage we want our artificial intelligence to reach, and the one I will discuss in the greatest detail.

In his essay "Can Thought Go On Without a Body?", Jean-Francois Lyotard states,

In what we call thinking the mind isn’t ‘directed’ but suspended. You don’t give it rules. You teach it to receive. You don’t clear the ground to build unobstructed: you make a little clearing where the penumbra of an almost-given will be able to enter and modify its contour (Lyotard 19).

In other words, to be a formal operational thinker, the mind must be able to receive an idea and enhance it, similar to a tiny acorn being planted and growing into a giant oak tree.

Lyotard also states that "thinking and suffering overlap" (Lyotard 18). This is especially evident in the formal operational thinker. The majority of students who attend universities are formal operational thinkers; if they are not when they arrive, they are when they leave. I believe the university's main objective is to create people who can think on their own, not just to teach students a specialized skill like geology or engineering. Universities want to create people who, when given a problem, will rise to the occasion and solve it without anyone holding their hand. So universities have their professors challenge the students: they give the students a prompt (the seed) and have them expand upon it, say in an essay or on an exam (the oak tree). Often the students complain about how hard the assignment is. They take caffeine addiction to a whole new level while pulling all-nighters to finish that paper or cram for that exam. It is never pleasant and always full of suffering. But no pain, no gain.

Taking it to the next level, the highest level of formal operational cognitive ability, requires the most suffering. The majority of university graduates go right into the job market, but some stick around. They pursue the higher degree. They suffer for anywhere between two years and the rest of their lives thinking, researching, and analyzing what has not yet been thought. "The unthought hurts," Lyotard says, "because we're comfortable in what's already been thought. And thinking, which is accepting this discomfort, is also, to put it bluntly, an attempt to have done with it."

All right, so we understand what thought is, its stages, and its development. But where are we now in developing an analogous machine or software that can mimic the process of human thought and think on its own? For this we look into artificial intelligence and the cutting-edge research behind it.

First, what is artificial intelligence? John McCarthy writes,

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable (McCarthy).


Artificial intelligence, in other words, is a computer program that can think on its own. Some forms are based on methods similar to how humans think, while others are based on pure computational power, such as today's chess programs. These programs play at the grandmaster level yet have little intellectual prowess compared to the human chess player; they compete by substituting massive amounts of move computation for their lack of intellectual capability (McCarthy).

Secondly, what are the different branches of artificial intelligence, and is every branch necessary to make a software package that can carry human thought beyond the sun's inevitable death? Don't get your hopes up: we are nowhere close to being able to produce robots like those of Hollywood films such as I, Robot and 2001: A Space Odyssey. Artificial intelligence and its branches are, for now, merely computer programs.

The first branch is logical artificial intelligence. Here, "what a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals" (McCarthy2).

Another branch is the ever-evolving branch of search. Here, the program examines large numbers of possibilities, such as moves in a chess game or inferences by a theorem-proving program (McCarthy2).
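To make this concrete, here is a minimal sketch of game-tree search, the kind of exhaustive move examination used in chess programs. The "game" is an invented toy, not any real chess engine: each position is just a number, and a move either adds one or doubles it.

```python
# Toy game for illustrating search: a position is an integer,
# and a "move" either adds 1 or doubles it. The maximizing
# player wants the highest number after a fixed number of plies.

def legal_moves(position):
    return [position + 1, position * 2]

def evaluate(position):
    return position  # leaf score: just the number itself

def minimax(position, depth, maximizing):
    """Score a position by searching every line of play to a fixed depth."""
    if depth == 0:
        return evaluate(position)
    scores = [minimax(m, depth - 1, not maximizing)
              for m in legal_moves(position)]
    return max(scores) if maximizing else min(scores)
```

Even this tiny example shows why chess programs need so much raw computation: the number of positions examined grows exponentially with search depth.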

Pattern recognition is another branch of artificial intelligence, and it does exactly what its name describes. A pattern-recognition program "makes observations of some kind, [and] is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most" (McCarthy2). Geoffrey Hinton at the University of Toronto published a paper on this very branch. In figure two of his paper, To Recognize Shapes, First Learn to Generate Images, he shows an image of handwritten digits that are difficult to recognize. His artificial intelligence program, a neural network, recognized every one of the digits correctly, even though it was not confident in its answers (Hinton).
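A toy version of the compare-to-a-pattern idea can be sketched in a few lines. This is vastly simpler than Hinton's neural network (the bitmaps and the pixel-counting similarity measure are invented for illustration), but it shows the core move: compare what you see against stored patterns and pick the best match.

```python
# Toy template matcher: a "digit image" is a 3x5 bitmap written as
# a 15-character string of 0s and 1s. Recognition means picking the
# stored template with the most matching pixels.

TEMPLATES = {
    "1": "010" "110" "010" "010" "111",
    "7": "111" "001" "010" "010" "010",
}

def similarity(a, b):
    """Count matching pixels between two equally sized bitmaps."""
    return sum(x == y for x, y in zip(a, b))

def recognize(image):
    """Return the label of the template whose pixels best match the image."""
    return max(TEMPLATES, key=lambda d: similarity(TEMPLATES[d], image))
```

Even a noisy "1" with a flipped pixel still lands closer to the "1" template than to the "7", which is the same robustness, in miniature, that makes pattern recognition useful on messy handwritten digits.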

Another branch of artificial intelligence is inference. Here scientists are working on programs that can infer things from given facts (McCarthy2). Take for example a bird: we infer that the bird flies, but when we find out it is a penguin, our inference reverses and we infer that it cannot fly. Scientists are working on programs that can do just this.
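The bird-and-penguin example is a classic of default (nonmonotonic) reasoning, and its skeleton can be sketched directly. This is a hypothetical toy, not a real reasoning engine: the general rule "birds fly" holds by default, but a more specific fact overrides it.

```python
# Minimal sketch of default reasoning: draw a conclusion from a
# general rule, but withdraw it when a more specific fact arrives.

def can_fly(facts):
    """Infer flight: birds fly by default, unless known to be a penguin."""
    if "penguin" in facts:
        return False          # specific fact overrides the default rule
    return "bird" in facts    # default rule: birds fly

# Learning a new fact reverses the earlier inference:
knows_bird = can_fly({"bird"})             # infer: it flies
knows_penguin = can_fly({"bird", "penguin"})  # retract: it does not
```

Ordinary logical deduction can only add conclusions as facts accumulate; the hard part McCarthy's inference branch studies is exactly this retraction step.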

Common-sense reasoning is the branch farthest from human-level ability, despite being under research since the 1950s (McCarthy2). Another branch, learning from experience, is making greater strides in its development, though it too has limits: "programs can only learn what facts or behaviors their formalisms can represent… [U]nfortunately, learning systems are almost all based on very limited abilities to represent information" (McCarthy2).

The branches I just listed are used in applications like game playing, speech recognition, understanding natural language, computer vision (interpreting the two dimensions seen by a camera as three dimensions), expert systems (a system that interviews an expert in a field and embodies that expert's knowledge), and heuristic classification (a system that is given information and puts it into categories for later use, like deciding whether or not to accept a credit card purchase) (McCarthy3).
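The credit-card example of heuristic classification can be sketched as a handful of hand-written rules. The categories, thresholds, and parameter names here are all invented for illustration, not drawn from any real fraud system.

```python
# Toy heuristic classifier in the spirit of McCarthy's credit-card
# example: sort a purchase into one of a few categories using
# hand-written rules rather than learned ones.

def classify_purchase(amount, owner_confirmed, past_fraud):
    """Return 'approve', 'review', or 'decline' for a purchase."""
    if past_fraud:
        return "decline"   # known fraud history: always refuse
    if amount > 5000 and not owner_confirmed:
        return "review"    # large unconfirmed charge: flag for a human
    return "approve"       # everything else goes through
```

The point of such systems is not subtlety but coverage: a few fixed heuristics can triage millions of transactions, leaving only the flagged ones for human judgment.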

It is difficult to map the current state of artificial intelligence onto Piaget's stages of cognitive development. Some artificial intelligence is at a preoperational level, like Geoffrey Hinton's digit-recognition work, while other systems sit below or above that level. Some artificial intelligence is learning how to interpret what it sees, and other artificial intelligence can beat a grandmaster at chess. McCarthy states,

Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don't develop till they are teenagers may be in, and some abilities possessed by two year olds are still out (McCarthy).

Where artificial intelligence falls short, then, is wherever its designers lack the understanding of how to translate intellectual processes into code.

Obviously we are not at a point where we can say our research in artificial intelligence is complete and that artificial intelligence is capable of carrying human thought forward. We still have some work to do, but we do have four and a half billion years left. So what else can we add to artificial intelligence to make it more than just a computer program that processes information and events like the human brain? Lyotard suggests having the machines suffer. Remember, he held that thinking and suffering overlap. So if we make the machines suffer, they can learn from their suffering and advance even further. I think we should also focus on making machines that can reproduce and create evolutionarily better machines to carry human thought further still. Eventually, I see a time coming when our human thought is long gone and what is left is synthetic thought created by artificial intelligence. That doesn't bother me, because thought, like life, is evolutionary and bound to change, but the building blocks, the ones we humans created, will still be there.

All right, so we have the software; now how do we make it survive the sun's death? Luckily for us, our sun is a lower-mass star and won't blow up the way a higher-mass star does when it goes supernova. However, it will still have a violent death. The sun will expand past the orbit of Mercury, its luminosity will increase 100,000-fold, and there will be a point called the helium flash when the star regains stability in its red giant stage. After this, the sun will collapse in on itself, expand again, this time burning helium rather than hydrogen, and then its outer shell will drift off at a rate of tens of kilometers per second, leaving behind a planetary nebula and a white dwarf (Chaisson 322). So, knowing that our sun's death won't be too violent, we can begin to theorize ways to survive its final stages. Those final stages will emit massive amounts of radiation, unlike anything we encounter today; I believe this will be one of our biggest challenges. We have two options here: build something that can protect our artificial intelligence from the radiation, or build something that can outrun it.

For the first option, we would need strong, protective materials. As of now, the strongest known material is the carbon nanotube, which has a specific strength some 312 times greater than that of steel (Carbon). This could work, but as of today we have only created nanotubes on the millimeter scale. Option two would be to build a ship that can outrun the solar radiation. We would need something able to travel close to the speed of light, the speed limit of the natural world as far as we know. Again, more work is needed in this field as well.

Luckily for us, we already have a laboratory that can test the materials and machines we create for strength, durability, and performance in outer space: NASA's "Spacecraft Chamber of Horrors," located at the Goddard Space Flight Center in Maryland, outside Washington, D.C. The first of the many tests the machinery would undergo is being placed in a centrifuge that whips the materials around so they experience the gravitational forces (up to 30 Gs) they can expect at launch and in flight (NASA). The materials can also be shaken on a number of vibration tables that simulate the vibrations endured during launch or space flight (NASA). An acoustics chamber lets scientists determine whether a material can survive the noise of launch. One of the most crucial tests is nicknamed "the rack," formally the Super Lightweight Interchangeable Carrier load test facility, where an object is pushed and pulled by pneumatic cylinders to simulate the forces it could feel in space flight (NASA). Another chamber is the electromagnetic interference chamber, in which radio waves are blasted at the object to test whether the radiation will disrupt the machine's operations (NASA). The final and most important test is being placed in the Space Environment Chamber, which creates a space-like vacuum and can cycle temperatures from 300°F down to -310°F (NASA).

Now, I have just spent over 2,900 words describing why it is necessary to focus our energies not on answerless questions but on surviving, or at least discovering a way to give our thoughts a chance at surviving, the nova. But why did I spend my time, or yours, doing so? Some people think this concept of surviving the nova is a bunch of hogwash, and unnecessary. They have a point. Who is to say we can, as a society, make it that far? Look at our struggle for energy: we go to war over it. Soon, if the predicted effects of global warming turn out to be true, we will be going to war over the basic building block of life, water. Also, look at where technology has gotten us. It has allowed us to create the atom bomb, along with viruses and chemicals that could eradicate all life on earth. Let's not forget man's track record of sacrificing the rest of the world for individual benefit; our current economic crisis comes to mind.

This outlook doesn't look so good. Bill Joy says it best in his article "Why the Future Doesn't Need Us": "Our most powerful 21st-century technologies – robotics, genetic engineering, and nanotech – are threatening to make humans an endangered species." His opinion is that we humans will create technology that endangers humanity, perhaps even driving it extinct.

But that is a rather pessimistic outlook. Personally, I am a glass-half-full type of guy, not a glass-half-empty one. So let's examine the good that can come from continuing our thought. If we keep progressing the way we are heading now, we may get there eventually. We have almost five billion years left, right? Consider what has been done since the dawn of civilization: Stonehenge and all of its predictive capabilities, the pyramids, modern-day metropolises, suburban sprawl, indoor plumbing, electricity, television, the internet, artificial intelligence, rockets, the International Space Station, and landing on the moon. The list could go on forever. Just think: everything we humans are, and everything we have created up to today, has really only been in development for around five thousand years. We have at least four billion years until we face our inevitable doom. Think of what we can do!

Also, Earth cannot be the only planet that life calls home. What if there are other planets like ours, whose inhabitants face similar problems? Wouldn't it be great if our thought machines reached those planets and helped the life forms inhabiting them advance to where we were when our sun died? Think of the possibilities then! Instead of just four billion years, the influence we humans can have on other life forms is infinite!

I agree with Lyotard that we should quit asking ourselves answerless questions. They won't get us anywhere or give us answers. All they do is provide entertainment for some, headaches for others, and mind training for deeper thought. So let's look into how we can allow human thought to survive past our sun's death. We may be able to influence other life forms, or at least save some lives by letting them learn from our mistakes. History does repeat itself.

Works Cited

Bertocci, Michele. "Cognitive Development." University of Pittsburgh: Introduction to Educational Psychology. Wesley W. Posvar Hall, Pittsburgh. Sep. 2007.

"Carbon Nanotube." Wikipedia. 2008. Wikipedia Foundation, Inc. 9 Dec 2008 .

Chaisson, Eric, and Steve McMillan. Astronomy A Beginner's Guide to the Universe. 5th ed. Upper Saddle River, NJ: Pearson Prentice Hall, 2007.

Hinton, Geoffrey. "To Recognize Shapes, First Learn to Generate Images." Home Page of Geoffrey E. Hinton. 26 Oct. 2006. University of Toronto. 9 Dec 2008 .

Lyotard, Jean-Francois. “Can Thought go on Without a Body?” [photocopy].

McCarthy, John. "Basic Questions." John McCarthy. 12 Nov 2007. Stanford University. 9 Dec 2008 .

McCarthy2, John. "Branches of AI." John McCarthy. 12 Nov 2007. Stanford University. 9 Dec 2008 .

McCarthy3, John. "Applications of AI." John McCarthy. 12 Nov 2007. Stanford University. 9 Dec 2008 .

"NASA's Chamber of Horrors." HowStuffWorks Videos. NASA. 9 Dec 2008 .

1 comment:

Adam Johns said...

Minor note: you need to learn the difference between semicolons and colons.

I think this is very good work. I'm not going to go into all of the details - I touched on some of them in the draft - but let's take this example: "In other words, to be a formal operational thinker, the mind must be able to receive an idea and enhance it; similar to a tiny acorn being planted and turning into a giant oak tree." What you are successfully doing is bridging Lyotard and contemporary neuroscience using a clear and, I believe, accurate metaphor. That's just fantastic, and much of the essay is effective in the same way. I was already very interested in Lyotard, obviously, but you've pushed that interest farther.

The learning and suffering discussion is similarly very nice.

Your discussion of AI is good; I'm glad you acknowledge the difficulty of bridging that with Piaget. I still wonder if there is a way you could have done more here, though. If there's one thing I wish for in this paper, it's that you had done more to explain how AI is moving toward the Operational level - or imagined how it might do so. Discussing suffering helped, but that discussion didn't seem complete either.

Your discussion of hardware was good, but a little rushed - I'm referring here most of all to the proofreading, which got sloppy at points.

While I thought your conclusion was pretty good, I can imagine ways of introducing more depth to it. One thing I might have touched on (obviously this is personal, and not your direction) is the possibility that the only way of taming our self-destructive urges is by taking the possibility of a long future seriously - a sort of mental/philosophical preparation for the long haul.

Despite the various flaws mentioned, your attempt to make Lyotard practical is smart and detailed, and you deal with a huge topic in reasonable detail within a reasonably compact paper.