Heimhenge wrote: Just sayin' that if you're talking about movies (like 2001) where an AI becomes malevolent, it seems like the reason for that failure should be part of the plot. I recall being disappointed in The Forbin Project for the same reason ... sure, after Colossus communicated with Guardian it "decided" it was OK to kill humans to get their data link reconnected, and later offers the excuse that it was trying to prevent war (which seems to be its prime directive).
I'd say if the AI is designed to be a tool, and it acts like a tool, that's fine. In The Forbin Project (haven't read it), killing a few people to prevent a war may have been something the designers considered "working as intended".
When an AI follows its instructions literally, but not their intent, that can be fine. However, it's been done so many times before that I'd say anyone writing that plot now should be subtle and clever about the logical hole the designers missed.
gmalivuk wrote: How is it realistic? How would you actually program any of the laws? Do you actually think anyone currently working on AI is trying to add these rules to their system? Do you think anything like the First Law would ever make it into a military robot?
So strong AI isn't a thing, and general-purpose robots aren't a thing; we imagine a sci-fi world and ask what would be realistic given some assumptions.
If we assume a robot understands (in common cases) "human" and "harm", and the robot is capable of an infinite number of tasks, then it makes sense to create a directive "do not harm humans". Similarly, photocopiers can copy an infinity of images, but they specifically refuse to copy US currency.
As I imagine it, somebody in super-science land figured out how to give a computer a reasonable understanding of "human", "harm", "robot", "obey", and action versus inaction. All robots contain this code so they can be future-fantasy robots. The three laws sit on top of that. Their specific knowledge and tasks sit on top of that.
As for military robots, they obviously wouldn't have a general "do not kill" command. But they would have a lot of specific ones: "don't kill civilians, don't kill allies, don't kill people who are surrendering, don't kill enemy medics".
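To make the layering concrete, here's a minimal sketch of what that might look like. This assumes the "super-science" part already exists as perception predicates (here a stub `would_harm`); every name in it is hypothetical and illustrative, not any real robotics API. The point is the structure: a general-law layer shared by all robots, and a specific-prohibition layer that replaces the general law for special-purpose (e.g. military) robots.

```python
def would_harm(action, target):
    # Stub for the hypothetical "super-science" perception layer:
    # does this action harm this target? Real robots would need
    # the hard part solved here.
    return action == "strike"


def base_laws_permit(action, target):
    """General layer baked into all robots: do not harm humans."""
    if target == "human" and would_harm(action, target):
        return False
    return True


# A military robot swaps the general law for specific prohibitions,
# like the ones listed above.
MILITARY_PROHIBITIONS = [
    lambda action, target: not (action == "strike" and target == "civilian"),
    lambda action, target: not (action == "strike" and target == "ally"),
    lambda action, target: not (action == "strike" and target == "surrendering"),
    lambda action, target: not (action == "strike" and target == "enemy medic"),
]


def military_rules_permit(action, target):
    """An action is allowed only if no specific prohibition blocks it."""
    return all(rule(action, target) for rule in MILITARY_PROHIBITIONS)
```

The task-specific layer would then only ever propose actions that pass the layer below it, which is the same "constrain a general-purpose machine with a short list of rules" idea as the photocopier example.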
The thing about recursion problems is that they tend to contain other recursion problems.