BARBELITH underground
 



Killed by a drone

 
 
tango88
13:29 / 06.11.02
Al-Qaeda members in Yemen killed by an unmanned CIA drone:
http://www.cnn.com/2002/WORLD/meast/11/05/yemen.blast/index.html


As soon as I heard this, I thought of Isaac Asimov's three laws of robotics:

1. Robots must never harm human beings.
2. Robots must follow instructions from humans without violating rule 1.
3. Robots must protect themselves without violating the other rules.
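The three laws above amount to an ordered veto list: a higher law overrides everything below it. A toy sketch of that precedence (purely my own illustration; the flag names are invented, and nothing like this exists in any real system):

```python
# Asimov's three laws as an ordered veto list. Each proposed action is
# described by boolean flags about its consequences, and is checked
# against the laws in priority order.

def permitted(action):
    """Return True if the action violates none of the higher-priority laws."""
    if action.get("harms_human"):
        return False                  # First Law always wins
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False                  # Second Law, unless obeying would harm a human
    return True                       # Third Law never overrides the other two

# A robot ordered to injure someone must refuse:
assert not permitted({"harms_human": True})
# A robot may ignore an order only when obeying it would harm a human:
assert permitted({"disobeys_order": True, "order_would_harm_human": True})
```

The point of the ordering is that self-preservation (the Third Law) never even appears as a veto: it can only influence choices the first two laws already allow.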


After all, it can't be long before they stick AI into these drones and have them hunting 'bearded militants' all over the Middle East.

My question is: how long will it be before Asimov's three laws are broken? Should intelligent machines (working as security guards, for example) be allowed to harm humans?
 
 
Pepsi Max
00:58 / 07.11.02
My question is: since when have Ike's Laws been in force?

Machines kill people every day. Some of them have sophisticated digital processors in them.
 
 
tango88
05:09 / 07.11.02
OK Pepsi, let's add the word 'intentionally' to no. 1 to make it clear.
 
 
Pepsi Max
07:22 / 07.11.02
tango88> Yeah, but do computers/robots currently do anything "intentionally"?

We don't have intelligent machines as I believe you are thinking of them. Ike's Laws have zero relevance for today's machine systems. How would the average computer/robot recognise a human being anyway?

Ike's Laws presuppose a world where human life isn't cheaper than cutting-edge technology. Or political necessity for that matter.
 
 
tango88
11:09 / 07.11.02
Pepsi,

I agree with your last point but I still think you're missing out the idea that while we don't have very intelligent machines today, they're not far down the road.

I'm not the unabomber but I can see several trends in place now that point to a dark future. I see the technology being developed and also the motive.

Here are a couple of links: there's a lot to disagree with in the articles but a lot to agree with too.

http://www.asee.org/prism/nov00/bad_dream/bad_dream.cfm
http://www.cnn.com/TECH/science/9802/18/swiss.robot/
http://www.abc.net.au/science/news/enviro/EnviroRepublish_52593.htm
http://www.globalideasbank.org/wbi/WBI-118.HTML
 
 
Pepsi Max
12:03 / 07.11.02
Basically we're talking The Terminator here, no?

Let's split your argument into two, inter-linked parts:

1. The intelligent machines bit. Interesting question but you're looking at it the wrong way. Ike imagined robots as thinking like human beings - but with specific limits on their actions due to programming. His robots lie, charm, play, go bonkers, etc. They're very anthropomorphic. But 'artificial intelligence' isn't really going down that road. The project to recreate the human mind in silicon is a dead end. Instead, machines are being designed that solve problems - but not necessarily in a human-style way. They're far closer to insects (collective swarming) or even bacteria in terms of their techniques than to humans. Applying Ike's Laws to these beasties is like trying to sue a locust. Nonsensical.

2. Machines being used to hurt or kill humans. Well, we already use machines to track and intervene: electric fences, surveillance equipment, drone aircraft.
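The insect-style problem solving in point 1 can be shown in miniature: a swarm of dumb "walkers" finds the low point of a function with no model of the problem at all, each one just jittering and keeping improvements. (My own toy sketch, far simpler than real swarm algorithms such as particle swarm optimisation.)

```python
import random

def swarm_minimise(f, lo, hi, walkers=30, steps=200, seed=0):
    """Find an approximate minimum of f on [lo, hi] by blind local search."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(walkers)]  # scatter the swarm
    for _ in range(steps):
        for i, x in enumerate(xs):
            # Propose a small random jitter, clipped to the interval.
            trial = min(hi, max(lo, x + rng.gauss(0, 0.1)))
            if f(trial) < f(x):       # keep the move only if it helps
                xs[i] = trial
    return min(xs, key=f)             # best walker's position

best = swarm_minimise(lambda x: (x - 2.0) ** 2, -10, 10)
```

No walker "knows" anything about parabolas or minima; the answer emerges from many trivial agents, which is exactly why a First Law framed in terms of understanding consequences has nothing to attach to here.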

Do I think there'll be robots in cop suits caught delivering Rodney King-style beatings by rogue spy satellites and then going off to chat about it over plastic donuts?

No.

Do I think that states will use whatever technology is available to control their own populaces?

Yes.

tango88> I agree with your last point but I still think you're missing out the idea that while we don't have very intelligent machines today, they're not far down the road. I'm not the unabomber but I can see several trends in place now that point to a dark future. I see the technology being developed and also the motive.

That's so last scene from Terminator.
"There's a storm coming..."
 
 
tango88
13:59 / 07.11.02
Well in fact there is research into robots that think like humans. Plus, there is an increasing desire to make robots look like humans, have facial expressions etc. as well as engage in human activities such as playing chess, football etc. You might say that computers approach chess in a different way, for example, but chess playing computers could still be considered in their infancy.

Personally, I do see robots delivering beatings. (Without the plastic donuts of course). But the point that I was trying to make originally was that all this seems to be coming about much faster than a lot of people imagine and I reckon it'll catch people off guard.

I don't remember the last scene from Terminator but I do really think that technology is one thing worth being concerned about.
 
 
w1rebaby
15:31 / 07.11.02
I agree with Pepsi. Human thought processes are mimicked in AI either for the purposes of psychological research into humans, to test theories, or to steal the most effective bits for practical applications. In chess, for instance, programs that pattern-match in a way similar to how it's thought humans do (grouping pieces etc) are interesting, but don't tend to perform as well as brute-force algorithms. If you're a psychologist, that's still interesting. If you just want to build a program that can win at chess, you won't be using that approach.
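The brute-force approach w1rebaby describes can be seen on a game far smaller than chess. A toy sketch (my own illustration, using the game "take 1-3 objects from a pile; whoever takes the last one wins"): the program has no patterns and no understanding, it simply enumerates every line to the end.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(pile):
    """Return +1 if the player to move wins with perfect play, else -1."""
    if pile == 0:
        return -1                     # previous player took the last object and won
    # Try every legal take; our score is the negation of the opponent's.
    return max(-negamax(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    """Pick the take that leaves the opponent the worst position."""
    return max((t for t in (1, 2, 3) if t <= pile),
               key=lambda t: -negamax(pile - t))
```

This exhaustive search plays perfectly (multiples of 4 are lost for the side to move) without anything resembling human insight, which is the sense in which brute force beats the psychologically interesting pattern-matchers at chess.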

Asimov's laws are applicable to human-like entities with the ability and knowledge to work out the consequences of their actions. Real AI, for a long while yet, is and will increasingly be small, smart tools built to do specific jobs. You could say that there was intentionality in a heat-seeking missile, but it's at the same level as a cockroach running away from light.
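That "cockroach level" of intentionality is just reactive control: behaviour as a direct function of the sensor reading, with no goal representation at all. A minimal sketch in one dimension, in the spirit of a Braitenberg vehicle (my own illustration, not any real guidance system):

```python
def reactive_step(position, stimulus_position, gain=1.0):
    """Move directly away from the stimulus, faster when it is closer."""
    distance = stimulus_position - position
    if distance == 0:
        return position + gain            # arbitrary escape direction
    intensity = 1.0 / abs(distance)       # stimulus falls off with distance
    direction = 1 if distance > 0 else -1
    return position - gain * intensity * direction

# The agent starts near a "light" at 0 and flees, step by step.
pos = 1.0
for _ in range(10):
    pos = reactive_step(pos, stimulus_position=0.0)
```

Calling that "intention" is a stretch; it is a fixed stimulus-response wiring, which is why Asimov-style deliberation simply doesn't apply at this level.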

There's no point in equipping your robot gun emplacement with the ability to calculate the socioeconomic outcome of killing people - it's too difficult (humans have trouble enough), it makes the system unpredictable, and you don't care anyway. In fact, you might even deliberately limit its ability to discriminate between civilians and military targets, in the same way that the intelligence required to launch bombing missions is often a lot less specific than humanitarians might wish. Once your robots start thinking they become entities, not tools, and you can't call their actions "tragic but unavoidable accidents" any more.

Asimov's stories weren't about real robots, anyway, they were philosophy of mind and ethical speculation. He could have written them about golems, or clones, or any entity created by humans.
 
 
STOATIE LIEKS CHOCOLATE MILK
01:26 / 08.11.02
I think this may work better in the Lab...

Personally, I think that seeing as how technology is usually developed for primarily military purposes anyway, IA's AI laws (clever wording, no?) will never be put into effect, let alone adhered to. Sad.
 
 
Less searchable M0rd4nt
10:48 / 10.11.02
There is indeed "research into robots that think like humans". There's one or two projects using a sort of ground-up approach: try and make something a bit like a human infant, refine it to be more like a human infant, try and get it to mature like a human infant, and so forth.

That's not the same as actually having a robot that thinks like a human. They don't have a human; not even a fairly dim human. What they have is... well, beetles. Not even very clever beetles.

There's even some debate as to whether one could ever create true artificial intelligence, although I'm inclined to think that it will happen eventually.

What is absolutely certain is that weapons and surveillance will get smarter and smarter, and that our governments will deploy these technologies in an attempt to control us. Just like now, only with more knobs on.
 
 
bjacques
06:04 / 11.11.02
Yeah, robots designed for specific purposes will always be only smart prostheses, and their apotheosis will come when they function as well as a third eye or a sixth finger, or would if we'd been born with those.

Speaking of robot cops administering beat-downs... remember THX-1138? Robert Duvall channel-surfs until he sees two robot cops beating up a black guy, a la Rodney King. Then there's the robot goon policing the underground 1901 Kansas world in A Boy And His Dog.
 
  