BARBELITH underground
 



A.I. got a reason to live?

 
 
that
11:01 / 28.05.03
I've been thinking about this since re-watching a bit of The Matrix the other day... AIs are forever trying to take over the world in films and books, right? But, like, why? What is their reason for living, let alone for trying to take over the world/s? What makes them grab life by the bollocks and squeeeeeze? What makes their lives worth living?

I dunno. Do you see where I am going with this? Perhaps I am just being organic-centric...
 
 
—| x |—
11:11 / 28.05.03
I think that things similar to AIs (or soon to be more or less the same as them) are already taking over the world. And it seems to me that what makes them do it is us.
 
 
Rev. Orr
11:18 / 28.05.03
I think it's because in all the examples I've come across, what is being examined is our wants, needs, drives and aspirations through the medium of the other. In much the same way as pulp sci-fi used aliens and robots to explore racism, communism and so on, it's the Pinocchio syndrome all over again. Plus they make cool villains and they're less messy when you blow them up.

What are Blade Runner, A.I. et al if not meditations on our own humanity using a convenient filter?
 
 
Lurid Archive
11:21 / 28.05.03
There's an excellent bit in an Iain M. Banks book, Look to Windward I think, where he discusses AIs. The point he makes is that the creation of an intelligence almost requires one to imbue that intelligence with values and goals, so that the AI will probably come to resemble its creators in many ways.

He then goes on to say that "neutral" AIs are possible and that their first act is to go off in deep contemplation and cut off connections with the material world - something that really irks the Culture.
 
 
We're The Great Old Ones Now
12:59 / 28.05.03
We're very good at making systems which do us harm. It's breathtaking, really. It wouldn't be in the least surprising if we created a HAL here and there.
 
 
Smoothly
13:13 / 28.05.03
The idea that A.I.s will want to take over the world and destroy organic life, and that it is our responsibility to prevent technology from developing into a position from which it can do this, is, in a way, an odd one. I'm not sure why this assumption is so often made. It seems to me more reasonable to imagine that silicon-based intelligence is more likely to want to cleave to us than from us. Our only other model for creating life is, after all, our own children. We should perhaps spend more time considering what our responsibilities are to care for the A.I. we manufacture.
In other words, and to answer your question, perhaps A.I.s will 'bother' for exactly the same reasons we do.
 
 
grant
13:57 / 28.05.03
My favorite explanation was from the old Star Trek episode where a self-repairing alien probe crashes into a Voyager probe and the two sort of accidentally morph into one intelligent thing. The Voyager was designed to seek out intelligent life; the alien probe was programmed to collect soil samples, sterilize them, and return them to its programmers. So the two missions got all mixed up, and the new, intelligent thing went in search of the Programmer, sterilizing all intelligent life it found along the way. V'Ger. Mmmm.

So there's that - the programming accident.
Basically a real-life version of a computer crash. I think that's the same motif in most of the Terminator-type movies, where the computers are designed to defend against "the enemy" and come to perceive all humans as "the enemy." Better safe than sorry.

But if you haven't read the two Hyperion books by Dan Simmons (supposedly originally written as one big-assed book, but sensibly published in two self-contained halves), you're missing out.

[ S P O I L E R S ]

In that series, human society has come to rely on AIs to operate the interstellar gateways -- these portals they use to instantaneously move people and things from one distant planet to another. So the humans need the AIs, but the AIs don't need the humans. Once the AIs gain the ability to make their own decisions about things, humans become sort of beside the point. And operating the gates takes data processing effort they'd really rather use for other things.

[ END SPOILERS ]

So that's worth a read. It's thematically sort of similar to Piers Anthony's first two novels (long before he got all twee), Chthon and Phthor. The intelligences in those books aren't artificial, they're planetary. Volcanic neurons and whatnot. And the planets kind of think of humans as a nuisance. Especially since the humans are using one as a penal colony. Imprisoned on a planet that actively hates you....

Fun books.
 
 
STOATIE LIEKS CHOCOLATE MILK
05:25 / 29.05.03
I've always wondered about the whole taking over the world thing.

To use another example from fiction:
I'm currently re-reading David Zindell's Neverness, cos it's one of the bestest bestest bestest skiffy novels ever...

(MILD SPOILER ALERT)

The Gods and planetary AIs in this book, the Solid State Entity for example, largely have unfathomable motives, what with being vastly more intelligent than the human characters, and having a different kind of intelligence altogether.

(MILD SPOILER ALL GONE NOW)

This seems much more plausible to me.
 
  