I agree with Pepsi. Human thought processes are mimicked in AI either for psychological research into humans (to test theories) or to steal the most effective bits for practical applications. In chess, for instance, programs that pattern-match in the way humans are thought to (grouping pieces into familiar configurations, etc.) are interesting, but they don't tend to perform as well as brute-force search algorithms. If you're a psychologist, that's still interesting. If you just want to build a program that can win at chess, you won't be using that approach.
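(For the curious, the brute-force approach mentioned above is basically deep game-tree search. Below is a minimal sketch of plain minimax, my own illustration rather than anything from a particular engine; the game-specific functions evaluate, legal_moves and apply_move are stand-ins you would have to supply.)

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    # Plain minimax: exhaustively search `depth` plies ahead and back up
    # the best score, assuming both sides play optimally. Real engines add
    # alpha-beta pruning, move ordering, transposition tables, and so on.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static evaluation at the search horizon
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves)
    return max(scores) if maximizing else min(scores)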
Asimov's laws apply to human-like entities with the ability and knowledge to work out the consequences of their actions. Real AI, for a long while yet, will be increasingly small, smart tools that do specific jobs. You could say there's intentionality in a heat-seeking missile, but it's at the same level as a cockroach running away from light.
There's no point in equipping your robot gun emplacement with the ability to calculate the socioeconomic outcome of killing people - it's too difficult (humans have enough trouble with it), it makes the system unpredictable, and you don't care anyway. In fact, you might even deliberately limit its ability to discriminate between civilian and military targets, in the same way that the intelligence required to launch bombing missions is often a lot less specific than humanitarians might wish. Once your robots start thinking, they become entities, not tools, and you can't call their actions "tragic but unavoidable accidents" any more.
Asimov's stories weren't about real robots anyway; they were philosophy of mind and ethical speculation. He could have written them about golems, or clones, or any other entity created by humans.