
Suppose I program a robot to look at the outside world at regular intervals and process new information. Also, suppose that it can do integration and differentiation to build concepts, so that not only external information but also internal information (a hierarchy of concepts) is processed at regular intervals. For a regular interval, take a millisecond.
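For concreteness, the loop I have in mind might be sketched like this in Python; the `perceive`, `integrate`, and `differentiate` functions are placeholders for the abilities I am supposing, not working implementations:

```python
import time

TICK = 0.001  # the one-millisecond interval suggested above

def perceive():
    """Placeholder: read the sensors and return new external information."""
    return {"seen_at": time.time()}

def integrate(percept, concepts):
    """Placeholder: combine the new percept with existing concepts
    into wider abstractions (the supposed 'integration' ability)."""
    return concepts + [percept]

def differentiate(concepts):
    """Placeholder: split concepts apart where new distinctions emerge
    (the supposed 'differentiation' ability)."""
    return concepts

concepts = []  # the hierarchy of concepts, grown over the robot's lifetime
while True:
    percept = perceive()                     # external information
    concepts = integrate(percept, concepts)  # internal processing of the
    concepts = differentiate(concepts)       # concept hierarchy
    time.sleep(TICK)                         # wait for the next interval
```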

Would that be a reasonable simulation of focus? Could we say that a human being is a program that was started at our birth and runs in a loop, processing new information every minute? Is this model missing something?

asked Apr 27 '13 at 00:15 by Bop

edited Apr 27 '13 at 09:59 by Greg Perkins ♦♦

Well, one fact about man that is missing from the "model" is sleep. Happens every day (normally), for many hours.

(Apr 27 '13 at 22:06) Ideas for Life ♦

I don't see how "sleep" changes the essence of my question, which is whether we can simulate our innate ability to "focus", as the first step in free will / volition, using a computer that wakes up at some interval and thereby has the opportunity to do a task such as looking at an object and reaching for it with a robot hand.

(Feb 11 '15 at 00:03) Bop

The question describes a computer-like approach to simulating man's capacity to focus his conceptual faculty, and then asks:

Could we say that a human being is a program that was started at our birth and runs in a loop, processing new information every minute? Is this model missing something?

In an early comment, I pointed out that the model doesn't account for human sleep. The significance of sleep is that it is a process of going out of focus as one falls asleep and then returning, sometime later, to a state of consciousness in which one can consciously focus one's conceptual faculty or not. An accurate simulation, representative of man as he actually is, would need to explain that process, including the question of how one "chooses" to focus if one isn't already in focus. How is it, exactly, that we can be completely out of focus while sleeping, then wake up and be able to choose to focus or not? Sleeping and waking up offer a potentially rich source of direct personal observation regarding conceptual focusing. I can think of a possible computer-like answer if we start from the premise that man's mind is capable of simulation by a computer in the first place. But how well does such speculation match the actual reality of living human consciousness?
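To show where the difficulty lands if one does grant that premise, here is a toy sketch in Python; the state names and the coin-flip "choice" are purely illustrative assumptions, not a proposed mechanism. The step that is supposed to model choosing to focus must be implemented by some mechanism, and a mechanical stand-in such as a random draw is plainly not volition:

```python
import random
from enum import Enum, auto

class State(Enum):
    ASLEEP = auto()
    AWAKE_UNFOCUSED = auto()
    AWAKE_FOCUSED = auto()

def choose_focus():
    """Stand-in for the problematic step: by what mechanism does an
    out-of-focus system 'choose' to focus? A coin flip is not volition."""
    return random.random() < 0.5

state = State.ASLEEP
for tick in range(10):
    if state is State.ASLEEP:
        state = State.AWAKE_UNFOCUSED  # waking is an event, not a choice
    elif choose_focus():
        state = State.AWAKE_FOCUSED
    else:
        state = State.AWAKE_UNFOCUSED
    print(tick, state.name)
```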

Moreover, merely matching the result of a thought process founded on the "computer premise" doesn't prove that man's consciousness uses the same kind of mechanism as the computer. That would be akin to claiming that if "A implies B" and "C implies B" are both true, then A and C are equivalent. Conceptual focusing in man could be an entirely different process that happens to produce some of the same effects as what man might be able to program into a computer.
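A concrete counterexample makes the point: let $A$ = "it rained," $C$ = "the sprinkler ran," and $B$ = "the grass is wet." Then

$$(A \Rightarrow B) \;\land\; (C \Rightarrow B) \quad\not\Rightarrow\quad (A \Leftrightarrow C),$$

since wet grass is equally consistent with rain and with sprinklers; a shared effect does not make the two causes the same process.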

Also, suppose that it [a programmed computer] can do integration and differentiation to build concepts....

That is a huge supposition. The intellectual burden of proof for that claim is enormous. So far, computers have only been able to carry out tasks that man programmed them to do, tasks that computers are capable of doing but which do not necessarily span the full spectrum of man's conscious capacities. Computer software continues to become more advanced, of course, as man refines his ability to build more powerful computers and program them more effectively. To my knowledge, it remains a huge challenge today in the field of "machine intelligence" just to simulate consciousness that functions automatically on a sensory-perceptual level, as in all non-human animals. (And man, too, has a sensory-perceptual level in addition to his conceptual capacity.) Even with conceptual cognition and focusing taken out of the task, the task remains formidable.

I suspect that far greater progress in understanding man in relation to computers will become feasible when man learns how to make computers better able to do what non-human animals do, starting with sensory-perceptual cognition, automatic goal-directedness, and automatic (reflexive) evaluation on the perceptual level. A key question would be: what else does a computer need to do with percepts in order to turn them into a human-like concept? That question, in turn, depends on an answer to a prior question: what does a computer need to do with sensory data (from multiple sensory modalities operating simultaneously and continuously) to form an equivalent of perception? This further presupposes a means of implementing multiple sensory modalities in a computer system in the first place. Today there are, of course, tactile sensors (though nothing like the sense of touch throughout the skin of animals), sound sensors, and cameras, but how does a computer system put them all together into something resembling sensory-perceptual cognition in animals? I don't think computers are quite able to do that very well so far, although I've seen news stories showing remarkable progress in some cases.
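To illustrate how much those questions leave open, here is a deliberately naive sketch in Python; the fixed time window, the sample data, and the `fuse` function are invented assumptions, nothing like animal perception. It merely groups near-simultaneous readings from different modalities into candidate "percepts" of a single entity, which is the barest shape of the putting-together problem:

```python
from collections import defaultdict

# Hypothetical timestamped readings (milliseconds) from three modalities.
readings = [
    ("camera", 0, "red blob at left"),
    ("touch", 1, "pressure on gripper"),
    ("sound", 2, "click"),
    ("camera", 10, "red blob centered"),
]

WINDOW_MS = 5  # naive assumption: readings within 5 ms describe one event

def fuse(readings, window=WINDOW_MS):
    """Group readings whose timestamps fall in the same window and treat
    each group as one candidate 'percept' of a single entity."""
    groups = defaultdict(list)
    for modality, t_ms, value in readings:
        groups[t_ms // window].append((modality, value))
    return list(groups.values())

for percept in fuse(readings):
    print(percept)
```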

answered Feb 12 '15 at 01:18 by Ideas for Life ♦
