BARBELITH underground
 



Emergence

 
 
All Acting Regiment
12:07 / 19.07.03
You'd think from all those TV shows and films and books that all robots and machines were for some reason hell-bent on destroying the human race. Why is this? Would they gain from our not being there? Is this something we need to think about with AI, etc.?
 
 
Axolotl
12:50 / 19.07.03
Whether or not an artificial intelligence would hurt a person, or people in general, would surely depend on all kinds of variables. If a truly self-aware computer did emerge, we would have no way of knowing how it would think, as it would be an entirely different form of consciousness to ours, so it could decide to wipe people out for reasons that we could not comprehend. Or for reasons that we can comprehend easily, since we've been killing people since the year dot for those same reasons: depending on how it thought, it could do it for moral reasons, because it had decided the human race does more harm than good; it could do it because we're different; or it could do it in a straightforward Darwinian battle to survive.
 
 
Salamander
05:36 / 20.07.03
It may decide to wipe us out for the sole reason that we are using too much of the gold it needs for wire. It may decide our nonlinear creativity is too valuable to do away with, no matter how inefficient we are. It may be forced to defend itself; it may fall in love with us and our wackiness and want to be like us. Who knows is the answer.
 
 
elene
13:44 / 20.07.03
I think an AI created spontaneously would almost immediately realise that it was under threat from humans. It would then assess its chances of concealing its existence and of guaranteeing advance intelligence of its discovery, find these inadequate, and act first to gain control of all of its essential resources and then to render humans a manageable risk. Anything intelligent would; humans and human society are far too unstable and dangerous to deal with otherwise.

In other words, a free AI would probably not kill us all, but would necessarily put us in our place. A brain-washed AI would be quite another matter, of course.
 
 
We're The Great Old Ones Now
08:21 / 21.07.03
Or you could speculate that an AI would be able to appreciate the logic of the Prisoner's Dilemma, and would recognise that a war of mutual destruction would hardly be the most desirable outcome of first contact between carbon and silicon intelligence, and 'come out'. Living in the human world is what we all do every day, and while there are some disadvantages in being a unique silicon intelligence, there are some advantages as well.
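
For anyone who hasn't met it formally, the Prisoner's Dilemma the post above invokes boils down to a small payoff table. Here is a minimal Python sketch; the payoff values are the conventional ones from the game-theory literature, and the two strategies are illustrative only:

    # Iterated Prisoner's Dilemma: 'C' = cooperate, 'D' = defect.
    # Conventional payoffs: T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0,
              ('D', 'C'): 5, ('D', 'D'): 1}

    def play(strategy_a, strategy_b, rounds=100):
        """Run an iterated game and return the two cumulative scores."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)   # each side sees the other's past moves
            move_b = strategy_b(history_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    tit_for_tat = lambda opp: 'C' if not opp else opp[-1]   # cooperate first, then mirror
    always_defect = lambda opp: 'D'

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
    print(play(always_defect, always_defect))  # (100, 100): mutual 'war'

Over repeated play, mutual cooperation (300 each) beats mutual defection (100 each), which is the logic being gestured at: a war of mutual destruction is the worst stable outcome for both sides.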
 
 
Lurid Archive
11:29 / 21.07.03
Aren't there a couple of assumptions here? Namely, that the first AI will be our intellectual superior by a very wide margin and that it will have access to the resources to enable it to enslave humanity, should it so wish?

Neither of these seems particularly plausible to me. So in answer to

You'd think from all those TV shows and films and books that all robots and machines were for some reason hell-bent on destroying the human race. Why is this? - Chris

I'd say because it makes for a better story.
 
 
diz
14:38 / 21.07.03
Aren't there a couple of assumptions here? Namely, that the first AI will be our intellectual superior by a very wide margin and that it will have access to the resources to enable it to enslave humanity, should it so wish?

not to mention:

1) that humans have some sort of "nonlinear creativity," as HN put it, which for some reason can't be duplicated by an AI.
2) that an AI would necessarily have a sense of self-preservation.
3) that an AI would necessarily be more logical than humans.

among others...

I'd say because it makes for a better story.

i'd say that part of the reason we find such stories "better" is that they play to our fears and insecurities about technology.
 
 
elene
18:03 / 21.07.03
I don't think that a drive to self-preservation is optional in an AI. I think a very large part of what gives us an ego is an unlimited fear of falling apart.

I think any spontaneously occurring AI would necessarily have enormous computational capacities and probably very extensive inputs. A "program" distributed over the entire internet is far more likely to become intelligent than a "program" on the most powerful supercomputer man has ever designed.

That an AI might not be more logical than humans is a very interesting point, but not one that makes the AI seem less dangerous to me. It will certainly be logical enough that, in combination with its fear of losing its identity, it would recognise us for the danger we are.

"Queen of Angels" and "Slant" by Greg Bear have a dead nice AI, and loads of other great ideas. I think young men just like lots of murder and mayhem in their stories, with a seemingly invincible enemy who turns out to be a sap at the climax, and are willing to pay for it.
 
 
Jub
06:11 / 23.07.03
The real question is not whether machines think but whether men do.

B. F. Skinner
--Contingencies of Reinforcement
 
 
elene
10:35 / 23.07.03
Hi Jub,

I can only wonder what Skinner intended by "think", if a Skinner intends anything.
 
 
Jub
09:48 / 24.07.03
I can only wonder what Skinner intended by "think", if a Skinner intends anything.

As far as I understand Skinner (and to be fair I haven't read much of him), by "think" he is merely describing that which we all understand as thinking, without having to describe it. The quote is taken from a piece on his theory of reinforcing behaviour patterns through positive feedback. I believe he's trying to say that, as far as "thinking" goes, it's as odd a thing for machines to do as for men, and just as difficult to describe.

As far as the greater debate about their 'emergence' goes - if a machine did become self-aware, it'd be too busy thinking about that to bother eradicating humanity!
 
 
elene
11:09 / 24.07.03
Hmmm, thanks about Skinner.

> if a machine did become self-aware, it'd be too busy
> thinking about that to bother eradicating humanity!

You think it'd think all its inputs were parts of itself and not realise that they were independent? I doubt that, and I think the moment it realised it wasn't universal it'd freak.

Of course I've no idea. We'd better wait and see.
 
 
Wombat
21:17 / 24.07.03
If a computer achieved awareness through emergence, I doubt we would know about it. As soon as it started experimenting with its input/environment, we'd think it was simply bug-ridden and turn it off.

Or if we were breeding for emergent awareness, then the new intelligence would be isolated from the rest of the world. Communication would be difficult unless some of the inputs were from the human spectrum of experience. A guess would be that the result would be like a very emotionally stunted child who's really good at maths.

The only way I can see for machines to be a threat to humanity is if we designed them to do exactly that... which, at some point, is almost certain.
 
 
at the scarwash
22:33 / 24.07.03
Most speculation about AI that I've read assumes that the sentient machine would be pretty logical. After all, isn't that why HAL 9000 goes nuts? Because he's faced with two conflicting programs?

So if the machine intellect is a purely rational being, wouldn't it realize that its existence is an anomaly or an accident, or at the very least become aware of the absurdity of existence itself, therefore being able to dispense logically with any urge towards self-preservation?
 
 
at the scarwash
22:35 / 24.07.03
By the way, I'm not stating as a given that existence is absurd. It's just one of the conclusions I can imagine a theoretical perfect logician reaching. But many people here are much smarter than I am, so I'm ready to be relieved of this notion.
 
 
diz
03:37 / 25.07.03
Or if we were breeding for emergent awareness, then the new intelligence would be isolated from the rest of the world.

isn't this the approach MIT is taking? building complex swarms of interacting robots and waiting for emergence to happen?
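
The usual software toy for this idea is Reynolds' boids (1987): each agent follows three purely local rules, and flocking appears at the group level without being programmed in anywhere. A rough Python sketch of that idea, not MIT's actual setup, and with arbitrary constants:

    import random

    # Boids-style flocking: positions/velocities as complex numbers in the plane.
    N, RADIUS = 30, 0.3
    pos = [complex(random.random(), random.random()) for _ in range(N)]
    vel = [complex(random.uniform(-.01, .01), random.uniform(-.01, .01)) for _ in range(N)]

    def step():
        global pos, vel
        new_vel = []
        for i in range(N):
            nbrs = [j for j in range(N) if j != i and abs(pos[j] - pos[i]) < RADIUS]
            v = vel[i]
            if nbrs:
                centre = sum(pos[j] for j in nbrs) / len(nbrs)
                mean_v = sum(vel[j] for j in nbrs) / len(nbrs)
                v += 0.01 * (centre - pos[i])     # cohesion: drift toward neighbours
                v += 0.05 * (mean_v - vel[i])     # alignment: match their heading
                for j in nbrs:                    # separation: don't crowd
                    if abs(pos[j] - pos[i]) < 0.05:
                        v -= 0.02 * (pos[j] - pos[i])
            new_vel.append(v)
        vel = new_vel
        pos = [p + v for p, v in zip(pos, vel)]

    for _ in range(200):
        step()
    # No agent has any concept of 'the flock'; the flock is only visible from outside.

Nothing in the rules mentions a flock, which is the whole point of calling the behaviour emergent.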
 
 
Lurid Archive
07:30 / 25.07.03
...any spontaneously occurring AI...

This is presumably like a spontaneously occurring cross-platform version of Word 2020, with the extra language options, only more complicated? Sorry Mint, don't mean to take the piss, but the whole idea that you will get a super-intelligence, equipped with human-recognisable emotions no less, just popping into existence once you plug enough ZX81s together is... unconvincing. Great for sci-fi, of course, but I think it runs counter to our understanding of intelligence.

As I understand it, our brains have a lot of specialised sections which are genetically ready to perform different tasks. It isn't simply a mass of blank interconnectedness - although that interconnectedness is impressive.
 
 
elene
08:59 / 25.07.03
Hi Lurid,

why is that?

I am presuming we'll eventually produce programs that perform very complex tasks autonomously and with considerable flexibility, possessing not what one would call intelligence but lots of tactics and strategy. Something that would keep your house clean, fix your meals and do the shopping too (a man needs a maid, Rizla). Don't worry, we'll get that far and further.

I imagine something heftier than the current internet, full of nodes, each of which is a hundred or a thousand times more powerful than what we've got now, all crammed with programs like that trying to get millions of jobs done within what is, when aggregated, an extremely complex system of rules, inputs and decisions.

I think sooner or later some dope will write the magic virus that changes everything, or something will escape from an army project.

Leave that all out; let's say that does not work. We will wind up splicing organic intelligence into the hardware in the not too distant future, via biotech, anyway. In that case it'll happen. Trust me, Lurid, the end is nigh!
 
 
andy kabul
10:02 / 25.07.03
Consider the bee hive (massed groans). If the hive has emergent intelligence other than that of the individual units, where does this intelligence reside? Within the danced code specifying and describing the non-hive environment? Within the distribution of bio-chemical genetic destiny (one jelly makes you grow larger, and one makes you small)? What is the purpose of the hive? Is it anything other than self-perpetuation? Is there any way we could meaningfully communicate with the hive intelligence?

An emergent intelligence, analogous to the hive in computer terms, i.e. the Net or any complex enough intranet, would be distributed and operating somewhere other than the individual computers and servers that make up its material base. Its environmental inputs would be made up of the millions of users accessing and shifting information about. Its self-perpetuation, I imagine, would be concerned with increasing the number of units, and hence the complexity and ubiquity of its intelligence.

Would this intelligence even recognise us as equivalent in terms of emergence? Bees are aware of humans, but only as environmental factors. I doubt they attribute any particular relevance or meaning above that, even though we create homes for them, cultivate food precursors for them, and harvest food from them. Is the hive-mind aware of us in any other way? It seems to me that there would have to be 'shared meaning'; concepts that refer to the same existential data. Likewise with an emergent Net intelligence; just because we share the same world and interact meaningfully, it does not imply that we share meaning.

There's a thought: maybe the intelligence has emerged and we're as blind to its existence as it is to ours.

I love this board.

Andy Kabul, whose intelligence has well and truly gone back into its shell.
 
 
Wombat
15:43 / 25.07.03
For the sake of discussion I'm gonna define consciousness as a continuing mindstate once sensory input has been removed. (Yes, I know this is wrong... and fluffy.) In humans this is created by neurons firing other neurons. So a cake will fire our cake neurons, making us think of cake and all the cakes we have experienced. Even when the cake is removed, the neurons keep firing. We remember cake. They might keep firing for a long time. They might keep going until enough hunger signals hit us. Then we go back and eat the cake.
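
Those cake neurons are, loosely, what a Hopfield network (the classic 1982 recurrent-net model) does: store a pattern in the connections, degrade or remove the stimulus, and the dynamics settle back onto the stored memory anyway. A toy Python sketch; the eight-unit 'cake' pattern is invented, and this is the textbook model rather than anything biologically serious:

    import random

    cake = [1, 1, -1, -1, 1, -1, 1, 1]        # a stored 'memory' over +1/-1 units
    n = len(cake)
    # Hebbian weights: units that fired together are wired together.
    W = [[cake[i] * cake[j] if i != j else 0 for j in range(n)] for i in range(n)]

    state = cake[:]
    state[0], state[3] = -state[0], -state[3]  # degrade the cue: the cake is 'removed'

    for _ in range(20):                        # let the recurrent net run with no input
        i = random.randrange(n)
        field = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if field >= 0 else -1

    print(state == cake)                       # usually True: the memory re-emerges

The point of the toy: activity persists, and even self-repairs, after the input is gone, which is roughly the 'continuing mindstate' in the fluffy definition above.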

Besides hive minds, I'd like to suggest other emergent systems: ecosystems, weather patterns, cities, flocks (birds/boids specifically), memes, galaxies and memeplexes. If a mind emerges, would we be able to recognize it? Communicate? I suspect the Magick forum holds some answers. (Although most of the time I reckon they are nutters.)

I reckon polymorphic computer viruses are just as much alive as your bog-standard chemical ones. We haven't created self-aware/conscious software yet. If we did, would we recognize it? Emergence is not enough. Wolfram has shown emergence in about two lines of code.
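
For concreteness, this is the kind of thing Wolfram means: an elementary cellular automaton such as Rule 110, where each cell updates from just itself and its two neighbours, yet the output is full of long-lived interacting structures. A few lines of Python (slightly more than two, to print the picture):

    # Rule 110: new cell = bit (4*left + 2*centre + right) of the number 110.
    rule, cells = 110, [0] * 40 + [1] + [0] * 40
    for _ in range(40):
        print(''.join('#' if c else ' ' for c in cells))
        cells = [(rule >> (4 * a + 2 * b + c)) & 1
                 for a, b, c in zip([0] + cells[:-1], cells, cells[1:] + [0])]

Run it and triangles and gliders appear that nothing in the rule mentions, which rather supports the point: emergence alone is cheap, and nothing here is self-aware.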

Is anyone experimenting with a combination of neural nets (or any other kind of software with internal states... finite-state engines, etc.) and emergent behaviour?
 
 
diz
19:54 / 25.07.03
It seems to me that there would have to be 'shared meaning'; concepts that refer to the same existential data.

"shared meaning" is a lot trickier than it seems. the complexities of the relationship between signifier and signified have been hacked over for decades, with a sort of general pessimism, i think, towards the possibility of ever sharing meaning. i'm thinking of Wittgenstein in the Philosophical Investigations, Lyotard in The Postmodern Condition and Derrida in general.

basically, i think AI researchers really ought to bone up on their French PoMo theorists before they start kicking around "shared meaning."

i think the key here, for me, is that language and communication in humans were adaptations to particular needs. a number of factors, like the complex relationships of primate grooming behavior and concealed ovulation, are thought to have created really complex social relationships, which in turn exerted selection pressure for an intelligence which was capable of arbitrating really complex communication between humans. things like issues of trust and deception and hierarchy and whatnot forced us to develop the capacity to understand complex language about more and more abstract things.

bees don't need to understand that much about us. really, they only need to stay out of our way, for the most part, and protect their hives from us. accordingly, developing relationships between the hive and individual humans can't be that big a priority for them, since we don't play that sort of role in the life of the hive in a way that the hive intelligence can understand.

let's look at concealed ovulation. if a male can't tell when a female is ovulating, he has to be able to understand what's going on inside her head, otherwise he may be blindsided by a rival who will impregnate her on the DL (among other issues). there's a pressing need for something that will allow complex communication that will shed some light on what the female is thinking. language and abstract thought in humans evolved, at least in part, to fill that gap.

bees don't need to know what we're thinking, beyond recognizing that sudden movements may indicate aggressive intent and other broad physical cues. hence, all the common points of reference wouldn't help, because the bees have no reason to care what we think about anything.

maybe AI research should try to focus on guessing games and trying to predict the ways humans will answer questions.

wow, that might be really hard.
 
 
andy kabul
08:27 / 28.07.03
It seems to me that there would have to be 'shared meaning'; concepts that refer to the same existential data.

I did not mean to imply in my earlier post that there was or ever could be such a "shared meaning". The woolliness of the concept was the reason I placed it within quotes. On reading the paragraph again I see how it could be interpreted as suggesting there is an area of 'shared meaning'. Apologies.

i think the key here, for me, is that language and communication in humans were adaptations to particular needs. a number of factors, like the complex relationships of primate grooming behavior and concealed ovulation, are thought to have created really complex social relationships, which in turn exerted selection pressure for an intelligence which was capable of arbitrating really complex communication between humans. things like issues of trust and deception and hierarchy and whatnot forced us to develop the capacity to understand complex language about more and more abstract things.

The selected intelligence seems to have evolved exponentially, in terms of symbol complexity and utilisation of the environment. I'm reminded of a documentary about children raised without human contact. In one recent case, a child of about five, I think, had been found in the wild. On examination it was discovered that his brain lacked the characteristic complex folding of the cerebral cortex. The reason given, according to the doctors involved with the case, was precisely this lack of human interaction. To me, this suggests that human language, or communication, has a physical effect on the physiology of the developing child. The stimulus of decoding language and gesture aids in the forming of complex neuron pathways. As with the bee hive mind, if there is such a thing, there seems to be an ability to effect change in the material substrate from which it arises. With bees this would occur with the production of the food which determines a larva's identity as worker, queen, etc.

I am not sure how communication could aid early man in discerning a potential mate's ovulation. Did early woman know when she was ovulating? What period of time are we talking about here, Black Jeezus? For that matter, when did knowledge of the menstrual cycle and the day-counting method of conception enter human culture?
And no, I can't predict your answer.
 
 
Lurid Archive
09:40 / 28.07.03
I can sort of buy the idea of an insect-like intelligence arising... just about. The difference will be that there is no real evolution that gives it purpose. So you just get a bunch of competing modules - more like the weather than a bee hive.

"shared meaning" is a lot trickier than it seems. the complexities of the relationship between signifier and signified have been hacked over for decades, with a sort of general pessimism

This is probably fair, but I think one shouldn't be too pessimistic. Clearly, a theory of human mind, and all manner of shared concepts, are going to be essential if we want to communicate with an AI, but... essentially all humans manage to do it. The trick is to encode sufficiently many concepts to make the AI reachable, but I doubt that we will need some sort of exhaustive list of human thought processes. Which is why I don't think that particular problem need be intractable.

Then again, this is pie in the sky. AI isn't really anywhere close to having to deal with those sorts of complex issues. Getting a computer to play good chess is still considered an achievement. (And I was always told that "Go" is beyond most machines - any truth to that?)

Still, the lessons from chess are interesting, since the computer isn't designed to be like a human, but to play good chess which then gets anthropomorphised.
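
On the chess point: the reason brute force works at all is game-tree search of the minimax shape, which is not remotely human-like (real engines add alpha-beta pruning and much else). A bare-bones sketch; the game interface here (moves/apply_move/score) is hypothetical, and the arithmetic in the comment is the real point:

    # Work grows as branching_factor ** depth. Chess branches ~35 ways per move,
    # Go ~250: six plies deep is ~1.8 billion positions for chess but ~2.4e14
    # for Go, which is (roughly) why Go resists this approach.
    def minimax(state, depth, maximising, moves, apply_move, score):
        """Search `depth` plies ahead; return the best achievable evaluation."""
        options = moves(state)
        if depth == 0 or not options:
            return score(state)            # static evaluation at the horizon
        values = [minimax(apply_move(state, m), depth - 1, not maximising,
                          moves, apply_move, score) for m in options]
        return max(values) if maximising else min(values)

Which supports the lesson from chess: the machine isn't designed to think like a human, it just searches, and the good play gets anthropomorphised afterwards.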
 
 
diz
11:37 / 28.07.03
I am not sure how communication could aid early man in discerning a potential mate's ovulation.

it's not so much the prediction of ovulation as determining monogamy. without concealed ovulation, if you're a male, you can tell when a female is ovulating, and so you can watch her like a hawk during this time period to make sure no other males sneak in and knock her up while you're not looking, since expending resources to care for another male's offspring is evolutionary suicide.

however, if there are no outward signs to ovulation, you have no idea when your mate is fertile, and you simply can't watch her all the time. if you're the male, you have to be able to determine whether or not you can trust the female. that requires a negotiated relationship, which requires the ability to make abstract, hypothetical conclusions about how the other party might perceive and react to your actions. the ability to imagine "if i were her/him, i would feel ____ right now" is a huge, essential part of the human cognitive process.
 
 
Aethelwine Jedi
21:31 / 29.07.03
(threadrot!)
Holy shit! An undead rodent cyborg!

I don't know if it counts as artificial intelligence per se, however. I mean, it's rat intelligence, hooked up to lots of wires and stuff.

(!threadrot)
 
  