On the Philosophy of Artificial Intelligence
Since technology is a result of intelligence, smarter-than-human minds would have a snowball effect once they appear: smarter minds would be able to create even smarter minds, and so on...
AI and dualism
The existence of completely human-like (or smarter-than-human) artificial intelligence would probably disprove dualism. If we are not able to distinguish humans from their creation, and if we admit that humans cannot create a 'soul', then the whole concept of a 'soul' in man is undermined.
Ethics of AI
The prospective arrival of human-like artificial intelligence raises many ethical questions. Seeing robots merely as tools of humans, Isaac Asimov came up with three basic ethical laws that should govern future AI.
ASIMOV'S THREE LAWS OF ROBOTICS:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
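The strict ranking among the three laws can be sketched as a toy decision rule: an action is scored by which laws it violates, and the First Law always outranks the Second, which outranks the Third. This is only an illustrative model (the `Action` class and its fields are hypothetical, not anything Asimov specified), but it makes the priority ordering concrete:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action, described by which laws it would violate."""
    name: str
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    destroys_robot: bool   # violates the Third Law

def law_violations(a: Action) -> tuple:
    # Encode violations as a tuple; Python compares tuples element by
    # element, so an earlier (higher-priority) law dominates later ones.
    return (a.harms_human, a.disobeys_order, a.destroys_robot)

def choose(actions: list[Action]) -> Action:
    # Pick the action with the fewest violations in strict law priority.
    return min(actions, key=law_violations)

# A robot ordered to walk into danger: complying destroys the robot
# (Third Law), refusing disobeys the order (Second Law). The Second Law
# outranks the Third, so the robot must comply.
comply = Action("comply", harms_human=False, disobeys_order=False, destroys_robot=True)
refuse = Action("refuse", harms_human=False, disobeys_order=True, destroys_robot=False)
print(choose([comply, refuse]).name)  # → comply
```

The lexicographic tuple comparison is what captures the hierarchy: no number of Third-Law violations can ever outweigh a single Second-Law violation, and nothing outweighs the First.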
Some people would assign certain rights to the future humanoids. When Commander Bruce Maddox wants to disassemble Data (in “Star Trek: The Next Generation” series; episode “The Measure of a Man”) in order to create more androids for Starfleet's use, Data protests. “What follows is top-notch courtroom drama, made all the more tense by the fact that Riker is assigned the role of prosecutor. The moment wherein Riker simply switches Data off is chilling, calling into question the android's right to self-determination.”
For humans, vital functions always have priority over any other goals. With androids it is different: they will have to give up their existence in order to save a human life, to protect a human from injury, or to obey a human's order. Indeed, there are holes in the three-law system, as in ethical dilemmas where a subject must choose which people will survive, knowing that the others will die. Nor would it be possible to commit suicide in the presence of an android, or to let a person be injured now in order to save many lives in the future. Asimov's three laws of robotics will definitely have to be improved.