Heretical Ideas -- We challenge the orthodoxy, so you don't have to.
"PHILOSOPHIC OPEN THREAD
A long, long time ago, in the short story 'Runaround,' Isaac Asimov introduced his famous 'Three Laws of Robotics.' They are:
First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This concept of the Three Laws is enormously influential, and in a lot of robotics literature, it is almost assumed that some variant of these laws will be implemented when robots become complex enough to require them.
So here's the question: if artificial intelligence advances to the point where robots are roughly equal to humans in intelligence, would the imposition of the Three Laws in the manufacture of robots be moral or immoral?"