BARBELITH underground

AI: Are the machines going to win?

 
  


 
 
Lurid Archive
09:40 / 27.05.02
Instead of rotting other threads, I wanted to start a discussion on the possibility of creating an artificial intelligence comparable to our own. Personally, I believe that it will be possible - I don't think that the refutations based on Gödel's theorem hold any water - but I also believe we are a long way from it.

IMO, the claims of the industry are much overstated. (I just read a quote from someone in AI saying that we will have AI that is 200,000 times more intelligent than us in 10 years.)

But I'd like to hear what you all think. I know about the idea of using evolutionary methods to create an intelligence, but while that seems sound in principle, I remain unconvinced that we could presently make it work.
 
 
Baba Yaga
13:24 / 27.05.02
When I was doing research into this field, I was told that the only reason we don't have sentient computers like the ones shown in movies is that our computers can't think that fast. Once they can process information that quickly, it becomes possible. Scientists have started working on this "quantum" chip, which will allow computers to work that fast.

One professor worked out that if we succeed, computers will become sentient in the same year that SKYNET (is that right?) became sentient in the finished Terminator script!
 
 
Our Lady of The Two Towers
13:38 / 27.05.02
I thought that was supposed to be 1998, or was that when HAL went on-line?
 
 
Baba Yaga
14:35 / 27.05.02
If I remember correctly the Terminator date is 20-- something. The annoying thing is that there are different versions of the script online which all give different dates so the only way to be sure is to watch the movie again.
 
 
netbanshee
14:49 / 27.05.02
It certainly seems to be a technology thing... smaller, faster, etc., when it comes to having systems that rival our own abilities. The raw computational power that lies in nano and molecular tech will certainly provide the horsepower, yet I'm curious to see what the best silicon could do.

I'm sure better routines and programming featuring response to feedback will also make quick strides. Some of the financial systems out there already perform on par with or better than humans at investing and predicting trends. The algorithms used for AI today seem to be just simple tasks coupled together, requiring lots of processing, but what happens when the routines are fully optimized... shouldn't they individually be able to solve problems better than we can? Now couple these processes together.

Being cost-effective and feasible is also important... the fax machine was invented in 1929 (almost positive), but how long did it take to be put in use? I think it's hard to predict, since one strong change in technology can seriously affect anything... let alone the social arena. But if it does arrive in ten years, it'll hopefully be a nice surprise.
 
 
Lurid Archive
15:34 / 27.05.02
When I was doing research into this field, I was told that the only reason we don't have sentient computers like the ones shown in movies is that our computers can't think that fast

The implication of that statement is that we have a software model for intelligence and simply lack the hardware to effect it in a reasonable length of time. If this is true, it has completely passed me by and I'd be interested to see some articles about it.

Also, perhaps we have some disagreement on what constitutes intelligence. For instance, I have a calculator that can add up much faster than me. Is the calculator intelligent? I think we'd agree on a resounding "no". What about current computer programs, like the stockmarket prediction stuff or the chess computer Deep Blue? Are they intelligent?

That starts to get harder to say. I could say that anything these machines do is an essentially deterministic process, but I reckon this is insufficient. It's more to do with the notion of adaptability that characterises intelligence.

For instance, animals are able to cope with a wide range of situations in order to survive. So far, the applications I've seen of AI rely on responding to specific tests envisaged by the programmers. With greater computing power one can imagine that the range of these tests will continue to increase. I remain unconvinced that this approach leads to true intelligence - whatever that means.
 
 
Hieronymus
19:20 / 27.05.02
Gotta agree with Lurid on the overstatement of the progress. Books like Kurzweil's, while penned by an inventive genius, are presumptuously optimistic. Almost frighteningly so, as he completely ignores the darker aspects of this technology's potential in favour of visions of grandeur. Like all utopian treatises, it tends to oversimplify. AI that can pass the Turing test, like the type he envisions - the meaty stuff of science fiction - is a good long way off. A 10-20 year window is a projection beyond absurd. There's a whole miasma of blanks that need to be filled in before we can start patting ourselves on the back. And beating a human being at chess doesn't constitute intelligence, by a long shot.
 
 
Elijah, Freelance Rabbi
12:42 / 01.06.02
According to Sarah Connor, Judgement Day occurs in August of '97 I think - I was going to throw a party.
HAL went online in '98, I believe.

As far as AI goes, it's not processor speed that's really the problem.
Technically my PC now "knows" more stuff than I do, but it can't access the info all at once; the processor is plenty fast, but the HD moves slowly in comparison.

I read something a while ago about a design for a terabyte storage device that was built spherically, with all the data recorded on the inside so the longest path between two bits of data was a constant. I'm likely saying that wrong...

But anyway, if you had mass storage in a medium with fast access (flash-style memory for a digital camera works better than a hard drive, since there is no wait for moving parts), the processors of today would likely be able to keep up.
 
 
cusm
04:19 / 04.06.02
Modeling human-like intelligence is easy. Take a neural net, feed its output into its input, and give it a steady supply of incoming data to process. However, keeping it from instant madness is the tricky part, as is running one big and fast enough to do the job. It's also the training routine of the net that is the issue. Bio brains use a complex system of chemicals to change the weights between nodes on the neural net, to stimulate creativity and handle the actual training of the net. Training algorithms are the tricky part of working with neural nets. Ideally, a sentient net would first have to be taught how to write its own trainer program, and then feed itself to itself. Then, give it a job to do so it can optimize itself accordingly. By optimizing its optimization program, recursively, it can in theory bootstrap itself into sentience. In theory. It all looks a lot easier on paper.
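
That "feed its output into its input" loop is small enough to sketch. Here's a toy version in Python with numpy - illustrative only, with random untrained weights and nothing to do with any real AI project - which also shows the "instant madness" problem: the state just wanders.

import numpy as np

# A toy recurrent loop: the net's output is fed back in as its next
# input, alongside a steady supply of incoming data.
rng = np.random.default_rng(0)
n = 8                                      # number of nodes
W = rng.normal(0, 1, (n, n))               # untrained random weights
state = rng.normal(0, 1, n)                # initial activations

for t in range(5):
    incoming = rng.normal(0, 0.1, n)       # steady trickle of input data
    state = np.tanh(W @ state + incoming)  # output becomes next input
    print(t, np.round(state, 2))           # trajectory is unpredictable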

We're more likely to see models of human cognition rather than the real thing for quite a while. That is, linear rather than recursive neural net processing, trained to behave and learn according to the input given. That's just fine, really. If we're going to train superintelligent artificial neural networked processing machines, I'd rather they just didn't have the opportunity to consider thinking for themselves, thanks. What we'll see first is simply highly intelligent expert systems, some of which may be running on neural-net-like programming as a base. They'll be able to model higher and higher levels of abstract thought as they develop, but it's not quite real sentience until they can program themselves.

Still, it's coming. Anything we make that allows us to think smarter, even indirectly as with computers, allows us to develop technologies faster. Hence the theoretical compounding acceleration of advancement until the singularity is reached. That's the real bonus of the quantum computers. They might not create human-like intelligence of themselves yet, but they might allow us to figure out how to do it.
 
 
Lurid Archive
10:10 / 04.06.02
In theory, your method is pretty convincing, cusm. The only drawback is that, as far as I can tell - and correct me if I'm wrong - we have no idea how to implement some of the stages you talk about. It may be easy to write those training programs, but no one has done it. In fact, as far as I'm aware, no one even knows how to do it in theory - abstract thought for machines seems a long way off. The problem being that it seems all too likely that any "learned" behaviour is far too simplistic to be called intelligence...

As for reaching some kind of intelligence singularity... hmmm. I remember during the internet bubble that people were predicting that all companies would be virtual by 2050 - you simply extrapolate...
 
 
kid coagulant
18:08 / 04.06.02
Interview w/ Rodney Brooks about robotics/AI in the New Scientist:

http://www.newscientist.com/opinion/opinterview.jsp?id=ns23455

Interesting quote here: 'One of the hypotheses for this new stuff is that it is beyond "computation" in some sense. I think we've become terrible computational servants over the past few years. Everything is just computation and I look at some of my neuroscientist colleagues with dismay because they're using information theory and computation as the main metaphors for understanding how neural systems work. I don't think neurons developed originally as computers. They developed as synchronisation mechanisms for pulsating swimming motions.'

Also, since 'terminator' and '2001' have been mentioned above, something on AI and science fiction from kurzweil's site:

http://www.kurzweilai.net/meme/frame.html?main=/articles/art0471.html

'A lot of science fiction has been exploring lately the concept of uploading consciousness as the next, and final, step in our evolution, says SF writer Robert Sawyer, who reveals the real meaning of the film 2001: the ultimate fate of biological life forms is to be replaced by their AIs. Paging Bill Joy… '

Anthropomorphism will play a factor in this...
 
 
cusm
18:43 / 04.06.02
Actually, it's the trainer program that is the hard part, Lurid. Anyone can build a physical neural net. You can get parts at Radio Shack and put a simple one together. It's training it that requires complex algorithms, and that is still a very young science. Chaotic nets, like those we use in our own wetware, as opposed to linear line-by-line nets, are an even younger study. We have very little idea how they actually work on the mathematical level. Any text I've seen on the topic only refers to a recursive net as an "unstable state".

I just think they're fascinating things, as they can store and process information in ways we are simply unable to understand yet. But we can still use them, even if we don't know entirely how they work, which is the exciting part. The potential is to create a machine that can think of solutions to problems in ways we can't understand. That, if nothing else, is the real danger, actually. If we abstract the thinking process of invention further and further from our own levels of understanding, we lose control over it.
 
 
Lurid Archive
19:05 / 04.06.02
Perhaps I was being obscure in my post, but I was agreeing that the meat of the problem is in the training program.

I did read somewhere that people are doing this, but IIRC their aims are pretty limited. Just as I'm not planning on a career in Starfleet to explore the Universe, so I'm not getting worried about AI's taking over the world.

As to the maths of neural nets - chaotic or otherwise - I thought it was all understood on some level. No one knows how to program them, perhaps, but the computations you can get out are the same as with a bog-standard computer - Turing computability. Thing is, I don't know anything about it really, so correct me if I'm wrong.

But I thought that one of the amazing things about computers was that the Turing machine, "devised" in 1936 or so, is as powerful as you can get in some dodgy absolute sense.
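
For anyone who hasn't met the 1936 model: it's small enough to write down. Here's a toy Turing machine simulator in Python - an illustration of the model, not Turing's own notation - where the rule table makes this particular machine add one to a binary number.

def run_tm(tape, rules, state="start", pos=0, blank="_", halt="halt"):
    # Execute (state, symbol) -> (new state, write, move) rules on a tape.
    cells = dict(enumerate(tape))
    for _ in range(10_000):              # step limit in lieu of a halting proof
        if state == halt:
            break
        symbol = cells.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Scan right to the end of the number, then add 1, propagating carries.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "N"),
    ("carry", "_"): ("halt",  "1", "N"),
}

print(run_tm("1011", rules))  # prints 1100, i.e. 11 + 1 = 12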
 
 
cusm
20:59 / 04.06.02
Oh, they're used alright. We can train linear nets. It's the recursive, chaotic sort we can't deal with yet. I only harp on about those because a recursive neural processor wouldn't pass a Turing test because it emulated consciousness. It'd pass because it was conscious. There's nothing artificial about that sort of intelligence - provided anyone ever gets one to work without it immediately going mad.

Neural nets really are their own kind of beast. With a digital computer, you program the circuit, then give input to receive an expected output. With a neural net, you give the input and expected output, and then train the circuit to act as expected. Once it's trained, you can give it different input, which it'll process in a similar manner to give related output. You can use them to model how one set of data may behave based on the way another set of data behaves. They let you make educated guesses and projections. They're not so useful for basic math. They process in a completely different way than digital computers, which is why they're an item of interest.
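
That "give it input and expected output, then train" loop, at its absolute simplest, looks something like this - a single-neuron perceptron in Python, about the smallest trainable net there is, shown purely as a toy and not as what anyone would use for AI proper.

import numpy as np

# Teach one neuron to behave like an AND gate by showing it inputs and
# expected outputs and nudging the weights whenever it gets one wrong.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                  # expected outputs (AND)

w, b = np.zeros(2), 0.0
for epoch in range(25):
    for xi, target in zip(X, y):
        out = int(w @ xi + b > 0)           # what it currently does
        err = target - out                  # how wrong it was
        w += 0.1 * err * xi                 # classic perceptron update
        b += 0.1 * err

print([int(w @ xi + b > 0) for xi in X])    # [0, 0, 0, 1]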

Quantum processors, by the way, would also process in a completely different way - thus the excitement over them. Now, I don't mean a digital computer based on quantum bits. A quantum processor would include an indeterminate state - {yes, no, maybe} rather than just {yes, no}. This means they can guess. What's more, processing on the quantum level means they could attempt every possible solution to a problem at once, as the unknown state is all possible states. The probability field collapses when the solution is found, triggering an observation. If they work as advertised, it means 4th-dimensional parallel processing, and a completely new way to solve problems.
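
A very loose toy of that "every possible state at once until observed" intuition - just a state-vector simulation in numpy; real quantum algorithms steer the amplitudes with gates rather than merely sampling them:

import numpy as np

# A 3-qubit register as an amplitude vector over all 2**3 classical
# states: an equal superposition, then a "measurement" that collapses
# it to one outcome with the Born-rule probabilities.
n = 3
amps = np.ones(2**n) / np.sqrt(2**n)             # all 8 states at once
probs = np.abs(amps) ** 2                        # |amplitude|^2
outcome = int(np.random.choice(2**n, p=probs))   # observation collapses it
print(f"measured: {outcome:03b}")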

Anyway, back to neural nets emulating human thinking: Dr. Thaler's Creativity Machine can in fact create new things based on old data. It models human creativity by introducing noise into the network, much as an organic system is noisy. The result is variance in the output, which, when combined with another net to select desired results, allows it to do things like write songs. Creepy, but a major breakthrough in the understanding of how these beasts work.
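
The noise-injection idea can be caricatured in a few lines. This is a sketch of the principle only, nothing like Thaler's actual architecture: jiggle a "trained" weight matrix to get variations, with a stand-in function playing the second, critic net that selects the keepers.

import numpy as np

# Perturb trained weights with noise to get variations on learned
# output, then filter with a stand-in "critic".
rng = np.random.default_rng(1)
W = rng.normal(0, 1, (4, 4))                 # stands in for trained weights
x = np.array([1.0, 0.0, 0.0, 1.0])           # a familiar input

def critic(out):
    return out.max() > 0.9                   # stand-in "keep it?" test

for trial in range(10):
    noisy = W + rng.normal(0, 0.3, W.shape)  # noise -> variance in output
    out = np.tanh(noisy @ x)
    if critic(out):
        print(trial, np.round(out, 2))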
 
 
Lurid Archive
22:08 / 04.06.02
Am I wrong in thinking there is some confusion between the notions of the Turing test and Turing computability?

The former is a test whereby a computer tries to hold a (possibly text-based) conversation whilst posing as a human. The latter is a theoretical limit on the capabilities of computers. Clearly, different models of computing favour certain programming techniques and methodologies. The point I was making is that - as far as I'm aware - no one has proposed a design for a computer that could theoretically do anything more than the Turing machine (a particular model of computer) proposed by Turing back in the '30s.

So, it might be better to program in machine code than in BASIC, but the outputs you are able to get are the same. Church's thesis is a very strong form of that statement. For instance, you can "emulate" indeterminate states even though they are not fundamental to your hardware.
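
To make that "emulate" concrete: a deterministic machine can replace every nondeterministic guess with an exhaustive search - same outputs, potentially exponentially more steps. A toy constraint problem, invented purely for illustration:

from itertools import product

# Three boolean unknowns and some constraints; the "maybe" state is
# emulated by deterministically trying all 2**3 assignments.
def satisfies(a, b, c):
    return (a or b) and ((not b) or c) and ((not a) or (not c))

solutions = [bits for bits in product([0, 1], repeat=3) if satisfies(*bits)]
print(solutions)  # [(0, 1, 1), (1, 0, 0)]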

People say that quantum computing is different - although I've also heard it said that to do infinitely many computations requires measurement of infinite precision.

Still, cusm, you seem to imply that we have the know-how to make artificial intelligences that are "mad". I haven't read through your link yet, but this would be a revelation to me.
 
 
cusm
18:42 / 05.06.02
OK, standard neural nets have input and output lines, with rows of nodes that look a bit like this:


| | | |
* * * *
/\ /\ /\ /\
* * * * *
\/ \/ \/ \/
* * * *
/\ /\ /\ /\
* * * * *
\/ \/ \/ \/
* * * *
| | | |


That's what I mean by linear. Processing happens from the top to the bottom. Now, consider that each node is a neuron in your brain. Does your brain lay itself out in nice neat rows? No, of course not. Nodes are linked back and forth to each other in a complex and chaotic jumble, with circuits often feeding into themselves in repeating loops. That is the kind of neural net we would have to build for true AI. Unfortunately, we really don't have much clue how to train it. You can build one, feed it input, and it'll give you wildly unpredictable results. That's what I mean by it being mad.
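
For contrast, the "linear" top-to-bottom processing in the diagram above really is just a few lines - a numpy sketch with random weights purely for illustration:

import numpy as np

# A layered feedforward pass: activations flow strictly from the top
# row of the diagram to the bottom, with no loops anywhere.
rng = np.random.default_rng(2)
layers = [rng.normal(0, 1, (4, 4)) for _ in range(3)]  # rows of weights

x = rng.normal(0, 1, 4)          # the four input lines at the top
for W in layers:
    x = np.tanh(W @ x)           # one row of nodes at a time
print(np.round(x, 2))            # the four output lines at the bottom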

There is one application we do use a limited form of this sort of net for: computer RAM. The basic component of RAM is what's called a flip-flop. It's two nodes, each of which has two lines of input and output. Output 1 of node A goes to input 2 of node B. Output 1 of node B goes to input 2 of node A. Inputs 1 for both are external, as are the outputs. The output of a node is a product of its two inputs. So what you have is a neat little toy that saves a state by recursively recomputing its states. What those states are depends on both the current input and the previous input, since the previous input was stored. This will probably look like shit, but here's an ASCII diagram of one:


in A ---->[ A ]----+----> out A
     +--->[   ]    |
     |             |
     |    +--------+    out A feeds input 2 of B
     |    |
     |    v
in B ---->[ B ]----+----> out B
     |    [   ]    |
     |             |
     +-------------+    out B feeds back to input 2 of A

Basically, you can use them to store states of 1, 0, or chaotically flip back and forth between 1 and 0. They're neat. That's a recursive net with only two nodes, and they have something like 8 different states and combinations. I might be off; it's been years since I took Computer Architecture in college. For AI, we need something like that, only with millions of them. We have no blinking idea how to program that yet. But physically, the design seems simple, no? That's nature for you. Take a simple idea, and build something maddeningly complicated out of it.
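
Here's that two-node recursive net as a toy simulation - sketched as a cross-coupled NOR latch, one standard way to build a flip-flop, though not the only way. It shows all three behaviours: set, hold, and the chaotic flip.

def nor(a, b):
    return int(not (a or b))

def step(in_a, in_b, out_a, out_b):
    # Both nodes update in parallel: each sees its external input and
    # the other node's previous output.
    return nor(in_a, out_b), nor(in_b, out_a)

out_a, out_b = 0, 1
for in_a, in_b in [(1, 0), (0, 0), (0, 1), (0, 0), (1, 1), (0, 0)]:
    for _ in range(4):                    # let the feedback settle (or not)
        out_a, out_b = step(in_a, in_b, out_a, out_b)
    print(f"in=({in_a},{in_b}) -> out=({out_a},{out_b})")

# The (0,0) inputs "remember" the previous state; after driving both
# inputs high and then releasing them, the loop never settles - it
# flips between (0,0) and (1,1), the chaotic third behaviour.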
 
 
captain piss
10:51 / 06.06.02
Nature is certainly so staggeringly complex that... perhaps we have to wonder if our existing metaphors for computation are really up to the job?
Just to explore the issue raised in the Rodney Brooks quote from invix's post above: any thinking that's mired in the language of Turing machines and the computational metaphors we have at present is surely only going to get us so far.

There was an interesting article by Kevin Kelly in Whole Earth Review a couple of years ago, discussing the fact that the processes of computation we understand from digital computers have provided a very powerful template for understanding what's happening in lots of systems, whether it be mathematics, the behaviour of galaxies or biology (evolution is often termed an 'algorithm', just as DNA is often termed the 'software' to our bodies' hardware, for instance). The emergence of quantum computing has also led to physicists using the terminology of digital logic to describe atomic behaviour (summed up by physicist John Wheeler's dictum 'It from bit'). There is a quiet and possibly insidious trend towards science viewing everything in the universe as basically a computation, Kelly is saying.
Comments such as those by AI researchers like Brooks perhaps articulate a fear that this is going to be a kind of limiting thing - a barrier, if anything, to understanding what's actually going on in systems like the neural networks in our own heads.
 
 
Lurid Archive
13:28 / 06.06.02
Having skimmed your link a bit, cusm, I am a little wiser. But...

Although it sounds good in theory, I still think that the language is over-optimistic. For instance, let me tell you how to build an artificial human. Just get all the required chemicals (easy) and then combine them in the right way to get a human. Tada! So, I have no idea how to build an artificial human, since I've said nothing about the most complex step.

It's the same with this AI idea. Training neural nets sounds good, especially as it models some behaviour of our own hardware. But the fact that we don't know how to train "intelligence" or even "consciousness" means that the most important part of the problem still eludes us. In my book, that means we don't know how to do it. In fact, this is the line of argument that sceptics sometimes use to deny the possibility of AI - not a position I hold, but I know what they mean.

Actually, you can do this kind of training process with ordinary computers and genetic algorithms. You can implement exactly the ideas incorporated by the Creativity Machine. I mean, you can set a genetic algorithm to "evolve" programs which do whatever you want - intelligence, consciousness, creativity, whatever. It's easy to say it, but much harder to actually do. And bog-standard algorithms demonstrate aspects of creativity - the postmodernism generator, for instance. With my knowledge of math, I can easily write a program to come up with theorems that I have never heard of, much less imagined. I can also write a program which has embedded in it every computer game ever invented. And also those that haven't been.

But I'm saying all this in a way that makes it sound as if there is real creativity going on. On the whole it's a bunch of tricks. Perhaps intelligence is also a bunch of tricks, but you have to have an idea of how to put them together. Getting a wildly unpredictable result does not constitute "madness" for me, since there is no evidence of intelligence, consciousness, etc.
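
The "easy to say, hard to do" point fits in a screenful. A bog-standard toy genetic algorithm - and the target string and fitness function are the giveaway: the programmer supplied all the "creativity" up front.

import random

TARGET = "intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    # How many characters already match the goal the programmer chose.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for gen in range(500):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    # Breed the next generation from mutated copies of the fittest fifth.
    pop = [mutate(random.choice(pop[:20])) for _ in range(100)]

print(gen, pop[0])  # "evolves" to the target in a few dozen generations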
 
 
Tamayyurt
15:17 / 06.06.02
we will have AI that is 200,000 times more intelligent than us in 10 years.


Why is intelligence automatically equated with evil and destructiveness? If anything, I equate those things with stupidity and ignorance. So wouldn't an AI that's 200,000 times smarter than us be benevolent? I see two possibilities: 1) It'll take care of us the way a mature child takes care of its old, ignorant, insane parents. 2) It'll be indifferent and completely ignore us.
 
 
w1rebaby
21:12 / 06.06.02
I wanted to start a discussion on the possibility of creating an artificial intelligence comparable to our own.

I think discussion is drifting away from this point. You can train a neural net forever but not all human intelligence results from abstract training. Our basic design determines how and what we learn from training. We can build plenty of different types of systems that "learn" (not just neural nets) but unless they learn in the same way that humans do, they're not going to think and behave like humans, and we're not all that sure about how humans learn in the first place, certainly not down to the level of being able to simulate it totally with computers.

In practice this doesn't really make that much difference. If you can build a system that flies a plane in the way that you want it to, nobody cares whether it thinks like a pilot. There's no money in producing machines that simulate more than a certain subset of human behaviour - we have humans to behave like humans, after all.

There's some evidence that the human mind is more like a collection of dedicated systems for language, movement etc, rather than there being an underlying "general learning system" or "consciousness" that produces these skills (that I consider to be a rather romantic, dualist idea). You could have an artificial entity that was similarly a collection of systems, the overall effect of which was to produce something vaguely humanish. But our systems have evolved to deal with our environment and the physical restrictions of our hardware. The environment and physicality for an AI is very different. Imagine how the human brain might have developed were it capable of networking to other brains, or reproducing not from DNA but from direct copies of its current state.

So basically I think that unless someone deliberately goes out to produce human-like intelligence, we'll not get it, and even if people do set out to do that, they're unlikely to succeed, because it's an immense project, even the limits of which are not fully understood. We may end up with machines that outperform us in all the tasks that matter, but they probably won't be anything like us at all. We may not even realise that they exist in the first place. Would we recognise a distributed "consciousness"?
 
 
Lurid Archive
23:09 / 06.06.02
Perhaps I phrased that intro badly. I don't really care if an artificial intelligence thinks like us, has similar values and so on. I'm interested in the question of whether it is possible to create an intelligence worthy of the name. If pushed, I'll probably admit that I don't really know what this "intelligence" is, but I'm hoping that no one will bring that up.

But I was very interested in reading,

There's some evidence that the human mind is more like a collection of dedicated systems for language, movement etc, rather than there being an underlying "general learning system" or "consciousness" that produces these skills - fridgemagnet

Surely that isn't right? Doesn't the flexibility of humans mean that there must be more than dedicated systems? How do abstract thought, art, science, philosophy and religion arise solely out of dedicated systems? I'm happy to accept that the hardware is very significant - Chomsky has been a particularly strong proponent of this view for language ability. But surely, no matter how hard to define, intelligence and consciousness are real aspects of humanity? Perhaps what you mean by "dedicated systems" is rather more general and flexible than what I'm alluding to...
 
 
w1rebaby
08:58 / 07.06.02
It's a debated point, but there's nothing to say that what we consider "abstract" thought is not just the result of a few interacting systems. Just because "religion", say, is not directly the result of perception does not mean that it isn't purely a human-dependent artifact, and nothing abstract at all. An abstract reasoning system from a different source would not necessarily come to the same conclusions. A lot of people have theorised that art and creativity are in fact mutant mating displays. We have complex minds which are going to produce a lot of emergent behaviour that's not immediately obvious.

Language is a common example - our language systems are to an extent hardwired, and we can perceive them as being essential to our abstract thought processes (again, arguable). Perhaps abstract thought results from basic language routines of which we are not consciously aware, and possibly could not be.

I don't believe that our thought is as "abstract" as some people do. It suits us to believe that that's what makes us different and, for some people, "superior" to other sentients but it's such a fuzzy concept. We would find it very difficult to actually recognise the limits of how we are able to think, and would simply define anything outside those limits as not being real thought.

One of the problems with consciousness study is that there isn't an identifiable area of the brain that's responsible for "abstract thought" or "consciousness" in the same way as there is for language or whatever.

I'd quote the paper I was thinking of in the last post, but, er, I read it a few years ago as part of a Natural Language Processing course I think, and I don't have my notes any more. It wasn't specifically an AI paper, it was more neuroscience / psychology. If there's any psychologists around they might be able to point you in the right direction. It was very compelling as to the existence of different parallel levels of mental function that can be entirely unaware of each other, though, and gave many examples of how, if there's an abstract level controlling all of these things, it doesn't work.
 
 
Lurid Archive
11:39 / 07.06.02
But it may well be that intelligence and consciousness are emergent properties rather than located in specific centres of the brain. As far as I can work out, this seems likely and makes it all the harder to define the concepts. Now am I agreeing with you by saying this or disagreeing, fridge?

The problem with saying that it's all "dedicated systems" is that it sweeps too much under the carpet. For instance, if I were a Dawkins-flavoured reductionist, I might say that life is just a dedicated system whose purpose is to propagate genes. What we see as intelligence is just a by-product of that.

In terms of AI, though, I'm not sure that is very helpful. Perhaps I'm wrong, but I think that calling something a dedicated system implies a specific and limited use and range of behaviours. It brings to my mind, clever but narrow solutions to engineering problems.

So to say that humans are just made up of dedicated systems for language, survival, propagation, movement etc. puts too little emphasis on the fact that each of these things can be performed in lots of different, dare I say imaginative, ways. There is a difference between a camera tied to a computer algorithm that attempts to recognise faces and the range of social interactions humans engage in that involve recognition.

To put it simply. We do things in flexible and imaginative ways that I've yet to see emulated, with more than limited success, artificially.
 
 
grant
13:33 / 07.06.02
Is this the kind of thing you're talking about when you say "training"?

Just over one year ago saw the birth of baby Hal, a computer program named after the intelligent computer in 2001: A Space Odyssey.

The person Hal calls "mummy" is Anat Treister-Goren, a neurolinguist who has been training Hal to take his first steps in language acquisition. The two talk on a daily basis, sometimes for hours at a time.

Ms Treister-Goren guides Hal through a virtual reality made up of typical examples from a child's world, like playing with a ball or visits to the zoo.

Hal learns to communicate correctly through a system of reward and punishment.

Wrong responses, which Hal has been programmed to avoid, are highlighted on the keyboard by Ms Treister-Goren.

Correct responses are praised and nurtured. To the delight of all those working on the project, Hal has already passed a test in which it fooled experts into believing it was a human - an adaptation of the famous Turing test for the equivalent of a 15-month-old child.

To date, the computer's language skills mirror those of an 18-month-old toddler.

Hal's vocabulary has now grown to an impressive block of words and he is capable of stringing words into intelligible phrases.

A language expert who recently examined transcripts of conversations between Ms Treister-Goren and Hal concluded that the program displayed all the normal trappings of an 18-month-old child.

 
 
w1rebaby
13:54 / 07.06.02
But it may well be that intelligence and consciousness are emergent properties rather than located in specific centres of the brain. As far as I can work out, this seems likely and makes it all the harder to define the concepts. Now am I agreeing with you by saying this or disagreeing, fridge?

you're agreeing with me

So to say that humans are just made up of dedicated systems for language, survival, propagation, movement etc. puts too little emphasis on the fact that each of these things can be performed in lots of different, dare I say imaginative, ways. There is a difference between a camera tied to a computer algorithm that attempts to recognise faces and the range of social interactions humans engage in that involve recognition.

I agree that there can be a bit of a mechanistic "it's all chemicals" attitude to some of the debate, but I think that's more about some commentators' misanthropy than anything else...

I think it's important to appreciate the interaction of our various systems as being what actually makes us what we are. To take your example, the basic process of face recognition in humans is extremely mechanistic and mostly unconscious, yet those recognitions then feed into a lot of other things to do with visual perception, memory etc etc, which also feed back to them and so on. What we call "recognising a face" is more than just a lookup function.

Or, alternatively, there are completely unconscious features of language recognition (e.g. the fact that we are far more sensitive to our own names than other people's seems to be a sort of pre-processing feature before we even consciously hear a word) but there's obviously far more to how we deal with language.

I guess my point is: there are so many different systems, at different levels and with different goals, interacting - the nature of and the connections between which we don't really understand - that it's much less likely that we can get an artificial system to work in the same way.

The idea that there is a seat of consciousness or a reasoning engine that generates all these behaviours in a top-down way is inaccurate. It's not just a question of building a system, plugging in the I/O, letting it learn away, and having that result in anything like human behaviour (that would be too anthropocentric an attitude to take). It's all much more of a fudge than that.
 
 
Lurid Archive
13:56 / 07.06.02
Yep. That's exactly it. I read about this in New Scientist a few months back, but I can't remember if they are using neural nets or not, and can't get the link to work. Hmmm. They must be.

The obvious criticism is that there is a lot of leeway in the speech patterns of an 18-month-old child. The postmodernism generator, suitably adapted, could probably do the same. The question is how far they can push this...
 
 
Lurid Archive
13:57 / 07.06.02
(The above is directed at grant's post and link.)
 
 
w1rebaby
14:02 / 07.06.02
Um, mimicking the language skills of an 18-month-old child is not that similar to mimicking the language skills of an adult. It's a very limited vocabulary and set of subject matter. Just because it can do that doesn't mean it will "grow up" and in x years' time be able to pass a real Turing test.
 
 
Funktion
05:57 / 19.02.03
fridgemagnet,

I think discussion is drifting away from this point. You can train a neural net forever but not all human intelligence results from abstract training. Our basic design determines how and what we learn from training. We can build plenty of different types of systems that "learn" (not just neural nets) but unless they learn in the same way that humans do, they're not going to think and behave like humans, and we're not all that sure about how humans learn in the first place, certainly not down to the level of being able to simulate it totally with computers.
So basically I think that unless someone deliberately goes out to produce human-like intelligence, we'll not get it, and even if people do set out to do that, they're unlikely to succeed, because it's an immense project, even the limits of which are not fully understood. We may end up with machines that outperform us in all the tasks that matter, but they probably won't be anything like us at all. We may not even realise that they exist in the first place. Would we recognise a distributed "consciousness"?


I wanted to add further support to this excellent point above. Fridgemagnet truly captures the essence of a perspective on AI that I think far too many people, like Mr. New Kind of Science Wolfram, don't quite grok somehow...

Computer intelligence may very well reach some incredible levels, like Kurzweil suggests in The Age of Spiritual Machines, but the key is that it won't be like human intelligence. Human minds are the unique result of our biological condition.
Too many people are caught up in looking at consciousness as some sort of yes-or-no question, when in reality consciousness comes in a wide range of degrees.
A system not capable of doing what, say, the lymphatic and immune systems of humans do couldn't truly be equivalent.
 
 
Quantum
13:11 / 19.02.03
Many scientists subscribe to the Brain=Hardware, Mind=Software view of intelligence, which is modelled on digital computers. It's a useful metaphor, but we could build a digital computer the size of Canada and it wouldn't be intelligent - it would just do as it's told (very quickly). Neural networks functionally mimic the brain and so are a step in the right direction, but still, the main obstacle to creating AI isn't our computing knowledge but our understanding of our own consciousness. Psychology, neuroscience, philosophy etc. have a long way to go before we can produce AI. (Sidestepping the question of 'intelligence' - I was taught there is no such thing, or at least that it is undefinable. We can easily say consciousness instead, so it's a pedantic point to make everybody say 'AC' - let's stick with AI for ease.)
Searle (AI philosopher) has a couple of very apt points. He posits consciousness as an emergent property (as Lurid and Fridge seem to be saying) and uses the metaphor of water: an H2O molecule is not wet, but a pool of water is wet - the system has properties the elements don't (hope that's what L. + F. were saying). So even if we can explain every part of the brain, we can't explain the whole brain - reductionist techniques are insufficient.
He also uses the Chinese Room thought experiment to show that understanding (and thus consciousness) is more than just information processing. In brief, you are put in a room with an input door and an output door, and are given a manual on how to manipulate Chinese symbols according to Chinese grammar (you don't speak Chinese). Symbols come in, you manipulate them according to the manual, and you put them out. To an external, Chinese-speaking observer the room might appear sentient (a la Turing test) and able to hold a Chinese conversation, but you yourself can attest that you don't understand Chinese and are simply manipulating symbols. There is more to consciousness than information processing.
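
The Room really is as mechanical as it sounds. A minimal sketch, with rulebook entries invented purely for illustration - the live question is whether the system as a whole understands, since the rule-follower inside plainly doesn't:

# The person in the room: takes symbols in, looks up the manual,
# passes symbols out. No understanding required anywhere inside.
rulebook = {
    "你好": "你好！",
    "你会说中文吗？": "会。",
}

def chinese_room(symbols):
    return rulebook.get(symbols, "请再说一遍。")  # "please say that again"

print(chinese_room("你好"))  # the room "converses"; the clerk grasps nothing
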
Additionally, there is the problem of qualia and other minds. I am experiencing stuff right now (a screen of words), which can be reduced to experiential qualities (e.g. shapes and colours), or qualia, which are irreducible to anything else - they are primary elements of perception, distinct from brain states and neural activity. Can a computer experience these qualia? I believe other people are conscious and experience things, because they are like me (and I know I do) and they behave as though they do - but they could be cleverly devised robots, or I could be a brain in a jar being deceived (e.g. The Matrix) and they may not be there at all. If it is difficult to ascertain that other people are conscious, how much more difficult is it to prove a machine is, which is not like me? The Turing test is not sufficient (see Chinese Room) because it doesn't distinguish between intelligence and simulated intelligence. If we did develop true AI, how would we know?
Interestingly, I heard recently that it would be impossible to develop intelligence without developing emotions - feeling seems to be an emergent property of thinking.
 
 
Quantum
13:22 / 19.02.03
To elaborate on "consciousness comes in a wide range of degrees" (Funktion): is a cat conscious? A rat? A spider? A microbe? Difficult to answer, I think. I agree we may not recognise AI as being like us, in the same way we might not recognise aliens as intelligent - consciousness may come in a wide range of styles - there may be qualitatively different intelligences. Would we consider them intelligent? I think I am agreeing with Funktion if I say we are subject to anthropic prejudice - when we say 'intelligent' we might just mean 'like us humans'.
 
 
cusm
20:22 / 19.02.03
If simulated intelligence can be accomplished to the degree that it can't be differentiated from the Real Thing, and its capabilities for adapting and solving problems are as good as genuine intelligence systems, does the difference between real and simulated intelligence really matter at all? Seems a bit like arguing semantics and missing the point to me. Granted, I approach things from a results-oriented direction.
 
 
w1rebaby
15:26 / 20.02.03
cusm: yes, I agree. I also come from that perspective. In practice, I think people will apply the duck test anyway - if it seems conscious and acts like it's conscious, they'll relate to it as if it's conscious. Given how people anthropomorphise everything they come across at the slightest sign (my printer hates me, my cat is clever), I'd be very surprised if they didn't.

qalyn: Argh, Searle. I hate Searle, mostly because of his incredibly arrogant, punchworthy attitude in interviews and lectures. But I also don't like the Chinese Room. The obvious point of it seems to be: you are acting as a component of the Chinese Room's brain. Whether or not you understand what you're doing is irrelevant; it's the whole that counts. You don't expect the components of your own brain to be conscious.

Searle seems to me to be a mental vitalist anyway (a mentalist?). In one of his appearances he scoffed "you can't make a brain out of beer cans!", as if the strong AI position was something no intelligent person would even consider.

As I was thinking about this, I started thinking about a super Chinese Room, call it the Chinese Bureaucracy. Documents come into the CB, and are passed around between people in a seemingly random fashion according to rules. Each person adds or modifies a paragraph according to the rulebook. If you were to examine the result at any one time it would make no sense. Except, the final response makes sense. Is the CB conscious? A bit like Gibson's idea of corporations or networks as lifeforms.
 
 
8===>Q: alyn
22:50 / 20.02.03
qalyn: Argh, Searle. I hate Searle,

Just for the record, that wasn't me.
 
 
w1rebaby
22:53 / 20.02.03
doh, sorry, these Q words are very confusing to me
 
  
