
My understanding is that Descartes considered the very inquiry of Artificial Intelligence (AI) to be futile. Does Objectivism offer an answer to whether or not AI is possible, or is this question of possibility a purely scientific one? If so, how can one determine which questions of possibility must be answered by science and which by philosophy?

asked Jan 11 '11 at 18:11

Vince Martinez

Nothing in Objectivism rules out artificial intelligence as such, that is, the idea of creating a man-made organism possessing consciousness. After all, humans are made of material components, and Objectivism rejects the need for (and existence of) a supernatural soul.

However, Objectivism is at odds with some currently popular theories of AI. These include the idea that the human mind is a species of computer, which is a flavor of determinism; the idea that the standard for consciousness or thinking is the ability to trick another mind (the Turing test), which is a form of the primacy of consciousness; and the idea that in the future there will be a "singularity" of machine intelligence that will radically transform the world and humanity's place in it, which is an arbitrary claim.

While AI has had success in complex but algorithmically well-defined tasks such as playing chess, the dominant computer-based approach has continually fallen far short of the promises of human-like cognition that have been the staple of science fiction for decades. (It should be noted that computers do not really play chess in the manner that a good human player does -- by using principles of chess play -- but rather via a brute-force search of the possible outcomes of different moves. The delimited and unambiguous rules of chess, quite unlike most real-life problems that confront humans or even other animals, are what allow the computer to accomplish the same outcome by a different method.)
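The "brute-force search of the possible outcomes of different moves" described above can be sketched in a few lines. This is not how any particular chess engine is implemented; it is a minimal illustration of exhaustive minimax search, using a toy game (Nim with a small pile of stones, where each player takes 1 or 2 stones and whoever takes the last stone wins) instead of chess. The function name and game are my own choices for illustration; real engines add pruning and heuristic evaluation, but the underlying principle is the same: enumerate outcomes rather than apply human-style principles of play.

```python
def best_move(stones, maximizing=True):
    """Exhaustively search all move sequences in a toy Nim game.

    Returns (score, move): score is +1 if the maximizing player can
    force a win from this position, -1 otherwise; move is the number
    of stones (1 or 2) to take, or None at a terminal position.
    """
    if stones == 0:
        # The previous player took the last stone and won, so the
        # player now to move has lost.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2):
        if take > stones:
            continue
        # Recurse: evaluate the position after this move, with the
        # other player (the minimizer/maximizer) to move.
        score, _ = best_move(stones - take, not maximizing)
        if (best is None
                or (maximizing and score > best[0])
                or (not maximizing and score < best[0])):
            best = (score, take)
    return best
```

For example, from a pile of 5 stones the first player can force a win by taking 2 (leaving a losing pile of 3 for the opponent), and the search discovers this purely by trying every continuation, with no notion of why the move is good. That is the contrast with principled human play that the paragraph above draws.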

On the other hand, computers are presently valuable for their characteristics that are notably not human: the ability to perform rapid and flawless mathematical calculations, the unambiguous interpretation of instructions, and the perfect obedience of commands. It is not clear that a machine with human-like consciousness would even be a value to us.

answered Jan 11 '11 at 23:48

Andrew Dalton ♦

edited Jan 12 '11 at 07:40

Sorry, but I have to demur on your account of AI's successes and failures. Readers should consult another source for specific information in this regard. I mention this because, for example, pattern recognition is one of AI's relative failures, not its strong point. Also, AI was originally challenged to perform a feat of uniquely human intelligence, specifically playing chess, and we all know how that turned out. AI's strengths and weaknesses are an interesting subject, but they aren't what is intimated here.

(Jan 12 '11 at 00:36) Mindy Newton ♦

I edited the examples to include chess, which actually supports the more skeptical view of AI.

(Jan 12 '11 at 07:41) Andrew Dalton ♦

I'm not sure how the singularity hypothesis is arbitrary.

I guess it rests upon the assumption that the development of consciousness is something which can be sped up?

(Jan 12 '11 at 21:53) anthony

You might find your own answer to this question if you consider whether the project of artificial life is possible.

As for questions of possibility, philosophy tells us that contradictions cannot exist, so the possibility of anything contradictory is known to be zero; beyond that, it offers nothing more specific.

answered Jan 11 '11 at 22:49

Mindy Newton ♦

edited Jan 11 '11 at 22:53

