On the Philosophy of Artificial Intelligence
Any sufficiently advanced technology is indistinguishable from magic. -Arthur C. Clarke
The notion of artificial intelligence is a broad and extensive theme within philosophy. Therefore it is not, and in fact cannot be, my intention here to introduce you to all aspects of the discipline. Rather, I will briefly discuss, in a fairly popular way, the basic concepts of the subject, such as the dispute over the definition of artificial intelligence and some of the problems arising from it.
What is Artificial Intelligence?
It is a fact that we all have a certain idea of what an artificial intelligence (AI) should be like, but few of us have thought the concept over carefully enough to define it precisely. This ambiguity in defining AI (which depends on the definition of intelligence) can also be found in the work of scholars.
Therefore, I will rely on the well-recognized Encyclopedia Britannica (EB), which defines artificial intelligence as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.”
This said, an artificial intelligence would have to perform tasks such as learning (the ability to adapt to new circumstances), reasoning (drawing deductive and inductive inferences), problem solving (searching through a range of possible actions in order to reach some predefined goal or solution), perception (recognition), and using language (communication). As Marvin Minsky argues, and as I had thought before working on this project, the understanding of the concept of intelligence shifts with the advance of technology: the more we know about ourselves, the further the lower boundary of intelligence moves. This would imply that creating a strong AI is not possible. According to Minsky, this is so simply because the term intelligence always denotes for us those characteristics that we still cannot scientifically describe.
An example can be found in computers playing chess. A hundred years ago, nearly everyone would have agreed that a machine playing chess is intelligent.
Now, with computers such as Deep Blue beating the world’s best chess players, we realize that the game strategy of chess can be described by relatively simple algorithms and that it is sheer computing speed that beats Kasparov, just as a car is faster than Michael Johnson...
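To make this point concrete, here is a minimal, runnable sketch of minimax game-tree search, the core idea behind brute-force chess programs. It is demonstrated on the toy game of Nim rather than chess; the choice of game and every detail below are my own illustration, not a description of Deep Blue, and a real chess engine differs mainly in its move generator, evaluation function and enormous search depth.

```python
# Minimax game-tree search demonstrated on Nim: players alternately take 1-3 stones,
# and whoever takes the last stone wins. The algorithm simply explores all move
# sequences and backs up the best outcome for the side to move.

def minimax(stones: int, my_turn: bool) -> int:
    """Return +1 if the player at the root can force a win, -1 otherwise."""
    if stones == 0:
        # Whoever moved last took the final stone and won.
        return -1 if my_turn else +1
    outcomes = [minimax(stones - take, not my_turn)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if my_turn else min(outcomes)

def best_move(stones: int) -> int:
    """Choose the number of stones to take that guarantees the best outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, my_turn=False))

print(best_move(10))  # 2: leaving a multiple of 4 stones is a forced win
```

Nothing in this search "understands" the game; it simply enumerates possibilities faster than a human could, which is the sense in which speed, not insight, beats Kasparov.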
If a cat does something, we call it instinct; if we do the same thing, for the same reason, we call it intelligence. -Will Cuppy
In a wider sense, under the term ‘intelligence’ we actually mean almost everything but the simplest human behavior (quite contrary to the quote). Let me follow with an example:
“What is it?”
“It is... it is... it is green.”
(Scott and Data trying to classify a beverage; Star Trek - The Next Generation)
Judging from the Star Trek dialogue above, socialization must play a certain role in human intelligence. A lot of what we consider intelligent behaviour is acquired during the process of socialization, through which we develop an awareness of social norms and values, appropriate ways of behaving, cognitive categories, and basic knowledge about the world around us.
It can be argued, on the other hand, that such a cultural orientation cannot be considered part of the ‘basic intelligence’ that defines a human being. People from different cultural backgrounds may not behave in certain situations in a way that seems proper to a (culturally familiar) observer, but they cannot be called unintelligent because of this. Stressing the ‘learning’ component of the EB’s definition can resolve this dispute: even a thing that knows nothing but can learn is an intellect, such as a human baby.
Strong AI – Weak AI: Minds or imitation of minds?
It is already common linguistic practice to describe computers as having memories, making inferences, understanding one language or another, and the like, but are such descriptions literally true or simply metaphorical?
In general, there is a basic distinction between two approaches to artificial intelligence: weak and strong AI. The term strong AI was introduced by John Searle in 1980; the strong-AI programme aims at building machines that can think, and its ultimate ambition is to “produce a machine whose overall intellectual ability is indistinguishable from that of a human being”. This school holds that human intelligence itself consists of the very computational processes that could be exemplified by advanced machines, so that it would be unreasonable to deny the attribution of intelligence to such machines.
Weak AI, on the other hand, argues that computers can only appear to think and are not actually conscious in the same way as human brains are.
To stress it again, these are not two types of AI, but rather two ‘schools of thought’ that do or do not believe in the possibility of creating truly human-like robots.
When a distinguished but elderly scientist says that something is possible, he is almost certainly right. When he says it is impossible, he is very probably wrong. -Arthur C. Clarke
Is strong AI possible?
In 1637, the French philosopher and mathematician René Descartes predicted that it would never be possible to make a machine that thinks as humans do. Noam Chomsky, the American linguist and political activist, suggests that deciding whether machines can ‘think’ is pointless, because the answer depends on an arbitrary definition of ‘thinking’. Nevertheless, the important question “could it ever be appropriate to say that computers think, and, if so, what conditions must a computer satisfy in order to be so described?” remains. One of the most commonly used procedures for assessing AI is the Turing test.
Turing test
A test to determine whether a computer can ‘think’ was introduced by the English mathematician Alan M. Turing in 1950. Turing argued that “if a computer acts, reacts, and interacts like a sentient being, then call it sentient”. With this definition he evaded the difficulty of distinguishing ‘original’ intelligence from simple though sufficiently sophisticated machine ‘parroting’.
During the test, a human interrogator interviews ‘the subject’ within a certain time frame (e.g. five minutes) in order to decide whether it is a human or a computer. The computer’s ability to ‘think’ is then measured by its success in being mistaken for a human being.
Turing expected that by the year 2000 an average interrogator would, after five minutes of questioning, have no more than a 70% chance of correctly identifying the machine; in other words, computers would fool people at least 30% of the time. No machine has yet come close to this point, although experiments have been carried out with bots, for example on frequently visited chat portals.
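Such chat-portal bots typically rely on exactly the kind of ‘sophisticated parroting’ Turing’s criterion deliberately side-steps. Below is a minimal sketch of an ELIZA-style pattern-matching bot; the rules and phrasings are invented purely for illustration and are not taken from any particular program.

```python
# An ELIZA-style pattern-matching chatbot: it matches surface patterns in the input
# and echoes words back in canned templates, with no understanding of what is said.
import random
import re

RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bare you a (robot|computer|machine)\b", ["Why does it matter what I am?"]),
    (r"\bmy (\w+)\b", ["Tell me more about your {0}."]),
]
FALLBACKS = ["I see.", "Go on.", "What makes you say that?"]

def reply(message: str) -> str:
    """Answer with a canned response from the first rule whose pattern matches."""
    for pattern, responses in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I feel lonely today"))  # e.g. "Why do you feel lonely today?"
print(reply("Are you a robot?"))     # "Why does it matter what I am?"
```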
Criticism of this test focuses on the fact that only conversational ability is being tested, leaving aside all other aspects of human intelligence. A rebuttal of the Turing test can be found in the so-called Chinese room argument.
Chinese room argument
John Searle came up with the argument that the Turing test cannot be a satisfactory criterion for proving intelligence. In his article “Minds, Brains, and Programs” he asks us to imagine a closed room with a non-Chinese speaker receiving Chinese symbols through a slot in the door. To him, the symbols are just so many squiggles and squoggles, but he has an English rulebook that tells him how to manipulate them and which ones to send back out.
To an observer outside, whoever or whatever is inside the room is carrying on an intelligent conversation, yet the man inside does not understand Chinese at all. Searle thereby argues that running a formal program is not enough to produce semantic understanding or intentionality. He would not claim that the mind is not a machine, but he denies that the execution of an information-processing program is enough for any machine to duplicate the mind’s genuine understanding.
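Searle’s point can be made concrete in code: a toy ‘room’ that maps incoming symbol strings to outgoing ones through a lookup-table rulebook can keep an exchange going while containing no understanding whatsoever. The sketch below is my own illustration, not Searle’s, and the table entries are arbitrary examples.

```python
# A toy Chinese room: the "rulebook" is a lookup table mapping incoming symbol strings
# to outgoing ones. The program keeps the exchange going, yet nothing in it understands
# the symbols, which is exactly the point about formal symbol manipulation.

RULEBOOK = {
    "你好": "你好！",             # "hello" -> "hello!"
    "你好吗？": "我很好，谢谢。",   # "how are you?" -> "I am fine, thank you."
}
DEFAULT = "请再说一遍。"           # "please say that again."

def room(symbols_in: str) -> str:
    """Manipulate squiggles according to the rulebook; no semantics anywhere here."""
    return RULEBOOK.get(symbols_in, DEFAULT)

print(room("你好吗？"))  # fluent-looking output, zero understanding
```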
This leads to another philosophical question: must the programmer always be more intelligent than his or her creation? Or does the intelligence of a machine merely reflect the intelligence of its creator? The Chinese room argument points toward the affirmative answer. In the same vein, there is a Christian argument for why people cannot completely understand God, nor even themselves: supposedly it is the same situation as a sculpture trying to understand its sculptor. This would mean that an AI cannot understand itself and therefore cannot act intentionally or have a mind of its own.
Looking Ahead
The last 50-60 years have made it clear that development in the information technology industry has been, and most probably will continue to be, tremendous. The trend toward producing ever more intelligent machines is apparent, driven by discoveries in computing as well as in neuroscience and psychology. The public, along with many scientists, likes to believe that it is merely a matter of time before artificial intelligence reaches human levels of intelligence. Their basic argument is that any human act can be reduced to algorithms that can be imitated by a computer program. Admitting that this is possible, we would have to accept much if not all of the deterministic theory which says that humans are ‘only’ assemblages of matter and energy that can be described by the laws of science, denying the concept of free will. Nevertheless, this problem is at least partially addressed by introducing randomness into the behaviour of future androids.
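As a purely illustrative sketch of what ‘introducing randomness into behaviour’ might mean in practice, the toy decision rule below contrasts a deterministic choice with a randomized one; the actions and their weights are invented placeholders, not a proposal for how androids would actually be built.

```python
# The same invented action preferences, chosen once deterministically and once with
# weighted randomness. Only the contrast between the two selection rules matters here.
import random

preferences = {"help": 0.7, "wait": 0.2, "refuse": 0.1}  # hypothetical action scores

def deterministic_choice() -> str:
    # Always returns the same action for the same preferences.
    return max(preferences, key=preferences.get)

def randomized_choice() -> str:
    # Usually returns "help", but occasionally something else.
    actions, weights = zip(*preferences.items())
    return random.choices(actions, weights=weights)[0]

print(deterministic_choice(), randomized_choice())
```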
Singularity
The point in the history of mankind at which our technology will cross the fundamental limits of human intelligence is called the ‘singularity’ (by analogy with the concept used to describe a black hole).
Since technology is a product of intelligence, smarter-than-human minds would set off a snowball effect once they appear: smarter minds would be able to create even smarter minds...
AI and dualism
The existence of completely human-like (or smarter-than-human) artificial intelligence would probably disprove dualism. If we are not able to distinguish humans from their creation, and if we admit that humans cannot create a ‘soul’, then the whole concept of a ‘soul’ in man is undermined.
Ethics of AI
Many ethical questions arise with the prospective arrival of human-like artificial intelligence. Seeing robots as merely tools of humans, Isaac Asimov came up with three basic ethical laws that should govern future AI; they are listed below, followed by a brief sketch of their priority ordering.
ASIMOV'S THREE LAWS OF ROBOTICS:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
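Read as a strict priority ordering, the three laws lend themselves to a simple sketch: a candidate action is vetoed by the highest-ranking law it violates. The predicates in the code below (harms_human, disobeys_order, endangers_self) are hypothetical placeholders for judgments nobody knows how to compute; only the priority structure is the point, and the dilemmas discussed further on arise precisely where these predicates collide.

```python
# Asimov's three laws as a strict priority ordering over candidate actions.
# The boolean predicates are hypothetical placeholders; a higher law always wins.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False     # First Law: injury through action or inaction
    disobeys_order: bool = False  # Second Law: conflicts with an order from a human
    endangers_self: bool = False  # Third Law: threatens the robot's own existence

def permitted(action: Action):
    """Veto a candidate action by the highest-ranking law it violates."""
    if action.harms_human:
        return False, "violates the First Law"
    if action.disobeys_order:
        return False, "violates the Second Law"
    if action.endangers_self:
        return False, "violates the Third Law (and no higher law requires the risk)"
    return True, "permitted"

print(permitted(Action("ignore a human order", disobeys_order=True)))
# (False, 'violates the Second Law')
```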
Some people would assign certain rights to future humanoids. When Commander Bruce Maddox wants to disassemble Data (in the “Star Trek: The Next Generation” episode “The Measure of a Man”) in order to create more androids for Starfleet's use, Data protests. “What follows is top-notch courtroom drama, made all the more tense by the fact that Riker is assigned the role of prosecutor. The moment wherein Riker simply switches Data off is chilling, calling into question the android's right to self-determination.”
For humans, vital functions always take priority over other goals. Androids, by contrast, would have to give up their own existence in order to save a human life, protect a human from injury, or obey a human's order. Indeed, there are holes in the three-law system, as in the case of ethical dilemmas where a subject must choose certain people to survive, thereby determining that others will die. It would not be possible to commit suicide in the presence of an android, or to let a person be injured now in order to save many lives in the future. Asimov’s three laws of robotics will definitely have to be improved.