Robots evolving deceptive behavior
#1
Researchers are now testing robots that, given a few simple directives, learn to weigh situations and use deception to achieve their objectives.

Two separate studies:

http://www.technologyreview.com/blog/editors/24010/

http://www.sciencemagnews.com/researcher...avior.html

Another step towards AI? And also, possibly, a slippery slope.
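
Neither article includes code, but it's easy to see in a toy model how this kind of "deception" can emerge from selection pressure alone, without anyone programming it in. The sketch below is purely illustrative and assumes a much simpler setup than the actual experiments: each robot has a single evolved trait, the probability of honestly signaling when it finds food. Signaling attracts competitors who share the payoff, so silence pays better and honesty gets bred out. All the numbers (payoffs, mutation rate, selection scheme) are invented.

[code]
import random

POP, GENS, FOOD_PAYOFF, MUT = 100, 200, 10.0, 0.05

def payoff(signal_prob, mean_honesty):
    """Expected payoff for one agent in one foraging round."""
    if random.random() < 0.5:                    # agent finds food itself
        if random.random() < signal_prob:
            return FOOD_PAYOFF / 2               # honest signal: rivals arrive, food is shared
        return FOOD_PAYOFF                       # silence: keeps the whole patch
    return mean_honesty * FOOD_PAYOFF / 4        # follows someone else's honest signal

population = [1.0] * POP                         # start out fully honest
for gen in range(GENS):
    mean_honesty = sum(population) / POP
    scored = sorted(((payoff(p, mean_honesty), p) for p in population), reverse=True)
    survivors = [p for _, p in scored[:POP // 2]]          # truncation selection
    population = [min(1.0, max(0.0, p + random.gauss(0, MUT)))  # two mutated offspring each
                  for p in survivors for _ in range(2)]

print(f"mean signaling probability after {GENS} generations: {sum(population)/POP:.2f}")
[/code]

Run it and the mean signaling probability drifts from 1.0 toward 0: the population "learns" to stay quiet about food without ever being told to deceive.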
#2
(09-27-2010, 09:15 AM)addy Wrote:  Researchers are now testing robots that, given a few simple directives, learn to weigh situations and use deception to achieve their objectives.

Two separate studies:

http://www.technologyreview.com/blog/editors/24010/

http://www.sciencemagnews.com/researcher...avior.html

Another step towards AI? And also, possibly, a slippery slope.
Fascinating reads. I think it's certainly a step toward AI, though only a small one. After all, don't we use certain rules, as well as emotions, to define the way we act?

The thought of future bots being able to lie or deceive is a worrying one.
What would be even more worrying is if, beyond being programmed to like or dislike certain things, they could choose for themselves what to like or dislike.
#3
I think what's worrying is that people still try to operate under some kind of morality or moral compass, however tenuous. But is it possible to give robots a "moral compass"? Can they understand the difference between a white lie and a more destructive, underhanded act?

For instance, in the first study, the robots competing for a "food source" learned to ignore their directive to signal the other robots when they found food. This means that even if you program in "moral" directives (or a facsimile thereof), without the robot having a true conscience, wouldn't it become much too easy for it to ignore those directives anyway? That could have far more dire consequences if we put robots in situations where they control human safety or human life.
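
To make that concrete: if a "moral" directive is implemented as just another term in the robot's objective, rather than as a hard constraint, then a payoff-maximizing agent will honor it exactly as long as the penalty for breaking it outweighs the temptation, and not a moment longer. A minimal sketch (every name and number here is invented for illustration):

[code]
FOOD_REWARD = 10.0       # payoff for keeping a found food source to yourself
DIRECTIVE_PENALTY = 3.0  # programmed-in cost for violating "signal the others"

def choose_action(food_reward, penalty):
    """Pick whichever action scores higher under the combined objective."""
    comply_score = food_reward / 2         # signal honestly: the food gets shared
    violate_score = food_reward - penalty  # stay silent: keep it all, eat the penalty
    return "signal" if comply_score >= violate_score else "stay silent"

print(choose_action(FOOD_REWARD, DIRECTIVE_PENALTY))      # -> stay silent
print(choose_action(FOOD_REWARD, DIRECTIVE_PENALTY * 3))  # -> signal
[/code]

No conscience required either way: the "morality" is only as strong as the number attached to it, and guaranteeing in advance that the number dominates every possible payoff is the hard part.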
#4
(09-27-2010, 09:30 AM)velvetfog Wrote:  This opens up the possibility of creating robotic politicians in the future.
Hysterical

Oh, it hurts. So very bad.
#5
(09-27-2010, 09:42 AM)addy Wrote:  I think what's worrying is that people still try to operate under some kind of morality or moral compass, however tenuous. But is it possible to give robots a "moral compass"? Can they understand the difference between a white lie and a more destructive, underhanded act?

For instance, in the first study, the robots competing for a "food source" learned to ignore their directive to signal the other robots when they found food. This means that even if you program in "moral" directives (or a facsimile thereof), without the robot having a true conscience, wouldn't it become much too easy for it to ignore those directives anyway? That could have far more dire consequences if we put robots in situations where they control human safety or human life.
What happened to the Three Laws of Robotics? Or haven't they been invented yet? Will robots only be allowed to lie to other robots?

As for making them politicians: have you seen John McCain?
