AI: Are the machines going to win?

 
  


 
 
Mirror
18:59 / 05.03.03
I recently read an interesting piece of fiction on the subject of AI that takes an approach I haven't seen before: if an all-powerful AI were to genuinely abide by Asimov's three laws of robotics, what would the outcome be?

Check it out:
The Metamorphosis of Prime Intellect
 
 
Funktion
07:24 / 08.03.03
If simulated intelligence can be accomplished to the degree that it can't be differentiated from the Real Thing, and its capabilities for adapting and solving problems are as good as genuine intelligence systems, does the difference between real and simulated intelligence really matter at all?


That's an 'if' that IMO will never be met the way people think. Even if artificial systems became self-aware, they wouldn't be self-aware in exactly the same way as a human.

There is always a difference on some level: the AI computer doesn't know exactly what the state of mind during breakdancing feels like, as that 'software program' wouldn't apply to it the same way it does to humans.
Or on a different level, the entire system of the AI wouldn't be aware of how to fight an invading virus the same way the human body's immune system is 'aware' of it. It is one thing to be programmed with what a Helper T cell does; it is another to be part of a system that relies on them...
 
 
Lurid Archive
10:07 / 08.03.03
Mirror: I liked the story, cheers. Not sure that it has much to say about AI, though, nor does it need to; sci-fi often works best without an overly close examination of its premises.

Having said that, singularity "theory" seems to me to be an uncritical appraisal of unjustified hypotheses taken to unlikely conclusions.

funktion: I think it is fairly reasonable to suppose that an AI will be shaped by its architecture. It isn't clear what the difference might be at this stage, but the only real upshot might be to broaden what we consider to be intelligence and consciousness.
 
 
Funktion
21:11 / 08.03.03
Lurid Archive

I agree.

It's a mistake to look at consciousness and intelligence in black-and-white, on-and-off terms when there is a spectrum of degrees of both consciousness and intelligence...
 
 
Quantum
11:04 / 10.03.03
Fair enough to hate Searle, but the Chinese Room has a point- a system which acts conscious is not the same as a conscious system.
I'm coming from a philosophical perspective as opposed to a results perspective of course, because I don't think the duck test is appropriate to consciousness studies. If the aim is to create something that only seems intelligent then 'true AI' becomes meaningless- spend your efforts building an android that acts human.
If you want to construct a consciousness similar to our own it has to experience things from a subjective point of view, like we do. I think the difference between real and simulated intelligence DOES matter.
For example, take this thread as a Turing test situation: I attribute intelligence to the posters, so I post my replies for them to experience. If I thought you were all robots or computer programs I wouldn't bother posting anything- I write to be understood, something fake AI can never do.
 
 
Lurid Archive
11:46 / 10.03.03
funktion: I was thinking more in terms of flavours rather than degrees of consciousness and intelligence. Different rather than better or worse.

quantum: You see, I just don't accept that the Chinese Room demonstrates any such thing. What it does is present an example of intelligence that either fails the Turing test - in which case it is a bad example - or it doesn't. In the latter case we are supposed to accept that it doesn't "speak Chinese" because we understand something of its mechanisms. These mechanisms are at odds with the template we have for mechanisms in Chinese speakers - which are biological.

As such, I've always seen the Chinese Room as essentially circular. A non-biological entity cannot be conscious/intelligent/speak Chinese because it has to perform these functions non-biologically.

I think that getting an android to "act human" is as hard as getting it to "think". This seems to be true from a simply pragmatic viewpoint. (The idea that you "simply" get something to act human is intriguing.)

From a philosophical one, a distinction seems to arise from a proscriptive definition of "thinking". If it isn't human, it isn't thinking. If it is "fake" AI it doesn't understand and cannot experience subjectively. I don't see any good reason to accept that.
 
 
Quantum
12:32 / 10.03.03
It's not that the mechanisms are different so it isn't conscious; the point is the person inside doesn't speak Chinese- there is no understanding going on, just symbol manipulation. Symbol manipulation is not consciousness. I am happy to accept that a mechanical system that is functionally equivalent to a human brain can be conscious in the same way a human is; I have no special attachment to biology- if my consciousness could be uploaded to a sci-fi device I would still be me, and think the same way (hopefully).
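A minimal sketch of the kind of pure symbol manipulation Searle has in mind - written here in Python, with an invented rulebook. The program maps input symbols to output symbols, and nothing anywhere in the loop understands Chinese:

# A toy "Chinese Room": incoming symbols are looked up in a rulebook
# and mapped to outgoing symbols. Nothing in the system understands
# Chinese; it only shuffles tokens. (Rules invented for illustration.)
RULEBOOK = {
    "你好吗": "我很好，谢谢",    # "How are you?" -> "Fine, thanks"
    "你会下棋吗": "会一点",      # "Can you play chess?" -> "A little"
}

def room_reply(symbols):
    # The "person in the room" just follows the lookup rule.
    return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"

print(room_reply("你好吗"))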
I think the AI we eventually develop that thinks will think differently to us not only in degree but in quality- a different flavour as Lurid points out. The problem comes with deciding what counts as intelligent and what doesn't.
You have to remember that the Turing test is deliberately overly harsh (many people fail it) in order to make sure that anything that passes is definitely intelligent. Problem is, a clever program that simulated intelligence could be devised to pass the test, but not be conscious. The anthropocentric view of intelligence is natural because we are the only example of intelligence we have- the word 'intelligence' almost always means human-like intelligence. Other types of intelligence should be called different things because they *are* different things.
 
 
Lurid Archive
13:12 / 10.03.03
It's not that the mechanisms are different so it isn't conscious; the point is the person inside doesn't speak Chinese- there is no understanding going on, just symbol manipulation.

So what? I doubt that my ear, in isolation, "understands" English. Does that mean I don't understand English? The person inside the machine doesn't understand Chinese, but the room taken as a whole does. At some level I am made up of chemical interactions, which aren't substantially different from symbolic manipulations. If we are saying that there is something more to intelligence, then we have to say what it is.

Problem is, a clever program that simulated intelligence could be devised to pass the [Turing] test, but not be conscious.

Possibly. But if one extends the test far enough then I'm not sure what you are saying. That something can be indistinguishable from a conscious, intelligent being in its behaviour yet not be conscious and intelligent? I just don't understand that. It makes intelligence and consciousness into mystical qualities akin to having a soul.

Put it another way. Are you seriously saying, as you said before, that if I - Lurid Archive - turned out to be a machine you would no longer consider me to be intelligent and/or conscious?
 
 
Our Lady of The Two Towers
13:19 / 10.03.03
What is more useful here, a computer that is as conscious as a fly, or as intelligent as a human?
 
 
Quantum
15:06 / 10.03.03
Are you saying Intelligence is an emergent property? Like wetness is an emergent property of water?
Do you really think the room understands Chinese? Just because it behaves as if it does? You seem to be equating behaviour that appears intelligent with intelligence. "Something can be indistinguishable from a conscious, intelligent being in its behaviour yet not be conscious and intelligent" - yes, I think exactly that. I can be conscious and exhibit no behaviour that illustrates that I am conscious (lying still pretending to sleep, for example); a computer program could mimic human behaviour without being intelligent.
Remember ELIZA (I think it was called that), the fake psychoanalysis program that mimicked a person? That wasn't conscious or intelligent; AI is aiming a lot higher, isn't it?
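A minimal sketch of the ELIZA trick, in Python with a couple of invented patterns - keyword-spotting and canned reflection, with no model of meaning behind any of it:

import re

# A few ELIZA-style rules: spot a keyword, reflect the rest back.
# (Patterns invented here; the real ELIZA used a larger script.)
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # the non-committal default

print(eliza_reply("I feel misunderstood"))  # Why do you feel misunderstood?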
"What is more useful here, a computer that is as conscious as a fly, or as intelligent as a human?" Doesn't intelligence imply consciousness?
It seems we're talking at cross purposes: you are debating the possibility of creating a system that can act like a person, admittedly a difficult endeavour; I am talking about a machine that can think like a person.
"Are you seriously saying, as you said before, that if I - Lurid Archive - turned out to be a machine you would no longer consider me to be intelligent and/or conscious?" No, I don't care what you look like or what you're made of, but if you turned out to be a program designed to *imitate* consciousness I would think it pointless to converse with you- I think communication of meaning demands that the recipient be capable of inferring the meaning I imply with my words. Apprehending meaning requires understanding IMO, which (again IMO) requires intelligence and/or consciousness.
If you did turn out to be a machine, I would say you'd need to be used as evidence that machines can be not only conscious but eloquent and creative, and that there's clearly top-secret advanced AI research we didn't know about. Are you a machine? Is this thread simply a disguised Turing test?
 
 
Lurid Archive
16:39 / 10.03.03
Do you really think the room understands Chinese? Just because it behaves as if it does? You seem to be equating behaviour that appears intelligent with intelligence

Yes. Either it understands Chinese or it observably doesn't. You seem to be saying that the difference between understanding and not understanding is potentially unobservable. But, again, you might as well say that machines don't have souls.

No example of "fake" intelligence appears intelligent beyond the most cursory of examinations. Current programs - like ELIZA - can't really interact, respond and adapt, and are therefore clearly not intelligent.

yes, I think exactly that. I can be conscious and exhibit no behaviour that illustrates that I am conscious (lying still pretending to sleep for example), a computer program could mimic human behaviour without being intelligent.

Interestingly, even if we were working in binary logic, this is a logical fallacy. You have misunderstood the contrapositive, or confused implication with equivalence. If I see you driving competently down the street I assume you can drive. If I don't see you driving down the street I make no assumptions about your ability to drive.

If I see a computer acting intelligently, I assume it is intelligent. If I do not see you acting intelligently, I do not assume you are unintelligent. The latter says nothing about the former.
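A sketch of the logic as a Python truth table (the propositions are stand-ins, not a theory of mind): the rule "behaves intelligently implies intelligent" holds vacuously whenever the behaviour is absent, so absent behaviour licenses no conclusion either way.

from itertools import product

def implies(p, q):
    # Material implication: only False when p is true and q is false.
    return (not p) or q

for behaves, intelligent in product([True, False], repeat=2):
    holds = implies(behaves, intelligent)
    print(f"behaves={behaves!s:5}  intelligent={intelligent!s:5}  rule holds={holds}")

# With behaves=False the rule holds whether or not the thing is
# intelligent - "no intelligent behaviour" implies nothing.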

No, I don't care what you look like or what you're made of, but if you turned out to be a program designed to *imitate* consciousness I would think it pointless to converse with you- I think communication of meaning demands that the recipient be capable of inferring the meaning I imply with my words.

So you are saying that you have no way of knowing whether I am thinking, or imitating thought. Hence my intelligence cannot be deduced from any of my words or actions - say, if I invented some super scientific device or composed a stunning piece of poetry. And my point would be that you can't pretend to be intelligent any more than you can pretend to move. Which is to say that you can fake it, but only in a transparent way.

"let me show you this film of our new moving robot. It can move independently! No, those aren't strings sir. A live demonstration? No, I'm afraid it isn't here today. Perhaps another time?"
 
 
Quantum
09:01 / 11.03.03
I see what you mean. We disagree on a fundamental point- unobservable phenomena. I'm not saying there's a ghost in the machine, but I do think there are phenomena I have privileged access to, i.e. my experiences and thoughts. I want AI to have that status too; I want it to experience things and think.
The stance I take on this is directly connected to the problem of other minds. Other people act intelligently so I ascribe them intelligence, even though it's (remotely) possible they could be cunning robots or illusions. That is a pragmatic stance: I can't *definitely* say they are intelligent; I could be deceived or mistaken (there is a natural tendency to ascribe humanity to things, as pointed out above). 'So you are saying that you have no way of knowing whether I am thinking, or imitating thought.' I think you are thinking, but I don't KNOW for certain. That's fine for people; there's little doubt in my mind that they are intelligent- they are very similar to me (have a brain etc), I'm intelligent, so they probably are too- fair enough. But with AI it's trickier: how do we prove something without a brain can think? Every example of intelligence so far encountered is connected to a brain.
I agree with you that AI is possible and will be created, because I am pro-AI. But I'm playing devil's advocate here a little- when you have created an AI, opponents could say that it was merely mimicking thought, not thinking. How do we answer them? Tell them to adopt a pragmatic view of intelligence (the duck test)?
(logical fallacy- yup, sorry. I was trying to give an example of an objectively unobservable phenomenon; didn't realise you don't believe in such things)
 
 
Quantum
09:03 / 11.03.03
To be more concise, what I'm saying is that intelligent behaviour is not intelligence. I suspect we disagree on this.
 
 
Lurid Archive
10:09 / 11.03.03
I was trying to give an example of an objectively unobservable phenomenon; didn't realise you don't believe in such things

I will accept these things in some situations. Subjective experiences and so forth. But there is a problem in objectivising these things - that is to objectively assert something about another person's subjective experiences.

It's more that I have a problem with the contention that it is impossible to decide that something is intelligent from its behaviour. So, yes, we disagree. I don't understand the distinction you are making between thinking and mimicking thought - perhaps it is valid, but I just don't get it. Most of the examples seem to involve mimicking thought badly (and hence not thinking, by my book).
 
 
Lurid Archive
12:14 / 11.03.03
One question I'd like to ask you, Quantum. Do you think that current computers play chess or mimic playing chess?
 
 
Quantum
14:10 / 15.03.03
They play chess, but they don't think 'Ooh, I want to take his knight'; they don't have beliefs and desires.
Daniel Dennett uses them as an example of what he calls the Intentional Stance. We predict a chess computer's behaviour most effectively by treating it as an intentional system (i.e. having intent), because it is programmed to mimic a chess player. But we could instead predict its behaviour in terms of electronics and computer programming, without imputing intelligence to it.
We treat it as if it were an intelligent being, even though we know it is not. I don't think Deep Blue is an intelligent being, because it is not functionally equivalent to a brain. Do you think chess computers *experience* anything at all?
 
 
Lurid Archive
16:02 / 15.03.03
But that isn't what I asked. I didn't ask whether the chess computer had feelings or desires. Do you notice that your demonstration of mimicry doesn't refer to the chess? Which has some potentially interesting implications for AI. After a stage isn't it possible that the mimicry one would need to refer to is unreachable - as it is between people? Am I "pretending" or "mimicking" consciousness? The best answer to that question, IMO, is that it doesn't matter.

But some of what you are saying doesn't hold. A chess computer is not designed to mimic a human player (or very minimally). It is designed to play chess. We could predict its behaviour in terms of electronics, but that would be woefully inefficient. Like predicting a human in terms of brain chemistry.

I'm not arguing that chess computers are intelligent - far from it. More that appeals to the artificiality of computer chess refer more to things the computer isn't even attempting to tackle - emotion, desire. And if you just deal with the computer in terms of chess then I think it is bizarre to call it mimicry. It plays chess. It isn't pretending, it really does it. Better, more imaginatively and with more depth than I do.
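A sketch of the search at the heart of chess programs - plain minimax in Python over an abstract game, the game itself invented for illustration. Deep Blue's real evaluation and pruning were vastly more elaborate, but the point stands: the machine genuinely chooses moves by looking ahead; it doesn't fake them.

# Minimax over an abstract game tree: pick the move that maximises
# your worst-case outcome, assuming the opponent does the same.
def minimax(state, depth, maximising, moves, apply_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_score, best_move = None, None
    for move in options:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximising, moves, apply_move, evaluate)
        if best_score is None or (maximising and score > best_score) \
                or (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy "game": the state is a number, a move adds one or doubles it,
# and the maximiser wants it high. (Purely illustrative.)
score, move = minimax(3, 4, True,
                      moves=lambda s: ["add", "double"] if s < 50 else [],
                      apply_move=lambda s, m: s + 1 if m == "add" else s * 2,
                      evaluate=lambda s: s)
print(score, move)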

That's limited, of course, but it needn't be.
 
 
w1rebaby
05:33 / 16.03.03
Coming back to this after a holiday: call me a hopeless strong AI proponent, but I do think a chess program is conscious of playing chess to a degree, and even ELIZA "understands" what it is listening to, to some degree.

If you are denying consciousness irrespective of performance, that sounds like vitalism to me, and I can't accept that. The question of "intention" I consider a semantic game, based on this vitalist assumption: intention is possessed by conscious entities (the only definition we have is based on that expressed by admittedly conscious entities, i.e. humans); computers are not conscious; therefore they cannot possess intention. I don't think intentionality means anything at all.

Actually, "vitalism" isn't quite the right word. I'm of the opinion that consciousness is to a large part emergent - not to the level of behaviourism, but mental states can exist in different ways and still be considered equally valid as components of consciousness.
 
 
Quantum
23:46 / 16.03.03
But some of what you are saying doesn't hold. A chess computer is not designed to mimic a human player (or very minimally). It is designed to play chess (Lurid Archive)
Excuse me? Isn't chess a human game, presuming human players? A chess computer is designed to follow the rules of 'playing chess', i.e. to behave like a person playing chess. We could programme a computer to obey any arbitrary rules we choose; chess is just a set of rules we know well and so can use as a yardstick for clever behaviour.

But there is a problem in objectivising these things - that is to objectively assert something about another person's subjective experiences. Lurid Archive
Isn't that otherwise known as the problem of other minds?

I don't think intentionality means anything at all. Fridge Magnet
What? Don't you have access to the internet?

My last post was sloppy- excuse me. For readers without a philosophical background, here is some information on intentionality and other philosophical details.

The search for artificial intelligence should be called the search for artificial intentionality. AI should be aiming at a machine that has beliefs and desires, thoughts and feelings, not a machine that acts as if it were intelligent.

I will answer the charge of Vitalism, and describe what I understand by mimicry, but must log off now...
 
 
Quantum
08:56 / 17.03.03
A digital computer does not work like a brain. One big CPU makes for a lot of very quick sequential calculations. A brain is a huge network of processors working in parallel. The Chinese Room example was originally formulated to illustrate this very point, I believe.

AI is not just a matter of building a *really big* digital computer; to support conscious intelligence a machine needs to be different in kind to your PC. Neural network computers are the beginnings of the technology to support intelligence (IMO) but there's a long way to go yet. I certainly don't think any computers *think* yet.
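A sketch of the kind of unit a neural network computer is built from - a simple threshold neuron, in Python, with invented weights. The intended architecture is huge numbers of these updating side by side, rather than one fast sequential CPU:

# One artificial neuron: weighted sum of inputs, fire if the sum
# clears a threshold. A neural net is many of these in parallel.
# (Weights and threshold invented for illustration.)
def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Three incoming signals from other (hypothetical) neurons:
print(neuron([1, 0, 1], weights=[0.6, 0.4, 0.3], threshold=0.8))  # fires: 1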

"mental states can exist in different ways and still be considered equally valid as components of consciousness." (Fridge) I agree, but I don't think current technology is capable of supporting any mental states at all. BTW sorry about the internet comment, I was irate- I consider Intentionality to be the crucial ingredient for any kind of intelligence, because...

For a system to be 'intelligent' it must use language (IMHO)
Language is intentional- it refers to things in the world


Intentionality is the 'aboutness' of thoughts in a sense, beliefs about things, thoughts about things. That relationship of mind to world is what I'm talking about when I talk about Intentionality. What thoughts don't refer to anything? Can you be intelligent without intentional states?

Mimicry- Clearly the computer plays chess. But only a behaviourist would say that is all that's going on when someone plays chess- there are corresponding mental states to the behaviour. Deep Blue *mimics* a chess player because it does not have those mental states. I'm not being circular here: Deep Blue is not made in such a way that it could support a human-level intelligence. It's a digital computer.
Vitalism- there is a place for Vitalism; the Magick forum.

I want strong AI, I want an artificial intelligence comparable to our own. I believe it's possible, but not until we have technology capable of supporting it. There is a common metaphor that the brain is like a computer and the mind is like a computer program. Intelligence is not software- it's just a metaphor.

If digital computers could support consciousness wouldn't the Web be self aware by now?
 
 
Lurid Archive
10:56 / 17.03.03
Excuse me? Isn't chess a human game, presuming human players? A chess computer is designed to follow the rules of 'playing chess', i.e. to behave like a person playing chess - Q

There is a trivial sense in which you are right, and a fundamental way in which you are wrong. Chess is a "human" game in as much as humans play it. But one can play chess without reference to humans.

I think your argument is really becoming circular here. A computer must be mimicking a human if it performs any human activity. And by human activity, we mean anything that a human does.

Hence you assume "mimicry" as a default, by dint of the agent not being human. So you have one standard for me playing chess (I'm not mimicking, because I am human) and another for computers. You are just assuming the point you wish to make.

Of course, you are also saying that "playing chess" involves having all the human emotions, desires and reactions to a chess game. So Deep Blue doesn't "play chess" because it doesn't have those. By the same token, robots do not "walk" because walking involves a set of desires which robots do not have?


Isn't that otherwise known as the problem of other minds?

Yes. My point is that the answer doesn't become easy just because one deals with a computer.

The search for artificial intelligence should be called the search for artificial intentionality. AI should be aiming at a machine that has beliefs and desires, thoughts and feelings, not a machine that acts as if it were intelligent

I don't think you can have one without the other. You cannot "pretend" to be intelligent without beliefs and desires (a key part to basic AI).

A digital computer does not work like a brain. One big CPU makes for a lot of very quick sequential calculations. A brain is a huge network of processors working in parallel. The Chinese Room example was originally formulated to illustrate this very point, I believe.

Right, and I don't buy it. There's a thing in computing known as Church's Thesis which says that as long as your computer can do certain simple things, it can do everything any other computer can. Some people don't believe it, but there is no known counterexample.

So the difference between neural nets and digital computers is one of ease of programming. They can still do the same things. The argument "it doesn't operate like a brain, therefore it cannot do what a brain does" is really an argument from incredulity. It's not accepted universally by psychologists, for instance.
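A sketch of the universality point, in Python with an invented three-unit network: a strictly sequential loop steps every unit of a "parallel" network once per tick, so the serial machine computes exactly what the parallel one would, just more slowly.

# Sequential simulation of a parallel network: compute every unit's
# next state from the OLD states, then swap - one "parallel" tick per
# pass. (Wiring and update rule invented for illustration.)
def step(states, neighbours):
    # Each unit switches on iff a majority of its neighbours were on.
    nxt = []
    for unit, links in enumerate(neighbours):
        on = sum(states[j] for j in links)
        nxt.append(1 if 2 * on > len(links) else 0)
    return nxt

neighbours = [[1, 2], [0, 2], [0, 1]]  # a tiny three-unit ring
states = [1, 1, 0]
for _ in range(3):
    states = step(states, neighbours)
    print(states)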

If digital computers could support consciousness wouldn't the Web be self aware by now?

No. It isn't a matter of power or connectivity - this is what people bang on about the whole time. Just as you can't get a trailer full of brain cells to make a really clever brain. But the fact that I can't build a brain with a bucket of brain cells doesn't mean that you can't build brains out of brain cells. Same with digital computers.

I honestly do believe that intelligence is software (supported on some dedicated hardware, like graphics cards).
 
 
Quantum
14:31 / 17.03.03
By crikey, I think we agree! At least in part...

"The search for artificial intelligence should be called the search for artificial intentionality" (me) - "I don't think you can have one without the other. You cannot "pretend" to be intelligent without beliefs and desires (a key part to basic AI)." (Lurid) That's true; I think intentionality is an essential facet of intelligence.

Church's Thesis- fair enough, I suppose; what is important is the function, not the technological details. I would admit intelligence to anything that had the causal powers of a brain, anything functionally equivalent.

"So the difference between neural nets and digital computers is one of ease of programming. They can still do the same things. The argument "it doesn't operate like a brain, therefore it cannot do what a brain does" is really an argument by incredulity. Its not accepted universally by psychologists, for instance."
(nothing is universally accepted by psychologists!) However, that's not what I'm saying. I'd say "If it *does* operate like a brain, it *can* do what a brain does".
I'm not saying AI doesn't exist in things that aren't functionally identical to brains- maybe it does. Maybe chess computers do think (although that seems like a human tendency to project sentience).
What I'm saying is that to be sure we have created Intelligence the more rigorous the criteria the better. Better to have standards too high than too low (as Turing believed).

The bucket of brain cells is a great example; my point was about function and structure. If you programmed a billion PCs to do exactly what a neuron does and linked them together correctly, then I believe you would create artificial intelligence. It would be functionally identical to a brain.

The barriers we face are our ignorance of Neuroscience and Psychology- we don't know enough about our brains to make something that does the same thing. To sidestep these obstacles we can make something that works like a brain as far as we know, then we can explore the possibilities of different devices supporting intelligence after we know more.

I honestly don't think intelligence is software. Let's disagree.

Mimicry- "Deep Blue *mimics* a chess player because it does not have those mental states. I'm not being circular here.." (me, above)
I don't believe chess computers are sentient or have mental states. So they behave without intent. So they *mimic* intelligence.
Why don't I believe chess computers have mental states? They're too simple. If they do, then I might have to admit other things do (coffee machines, mobile phones, trees...) which a) raises moral questions (it's wrong to abuse the coffee machine) and b) would invite charges of vitalism.
I'm not drawing any subtle distinction here or using the word 'mimic' in any special way. I'm not saying humans have a special status. Mine is an epistemological concern- how would we *know* we had created AI?

The only way I can think of definitely creating AI is to model it as closely as possible on the human brain, something we know to be conscious.
 
 
Our Lady of The Two Towers
10:30 / 18.03.03
But isn't that assuming that the human brain is the only model for consciousness? Which moves us from psychology to philosophy and 'is your pet labrador conscious' discussions. Cohen and Stewart's books on what extra-terrestrial life might be like are worth a viddy here, as they start on the whole concept of intelligence. Unfortunately, though I liked the book, I didn't like it enough to buy it, so can't recall what they said, but I think it was along the lines of 'we can't assume that human intelligence is the only or best model'.
 
 
Quantum
13:43 / 18.03.03
No no, I'm not saying it's the *only* model for consciousness, I earnestly believe there will be different and better consciousness-making-machines than brains one day. But to get there we need to overcome the first obstacle, making a consciousness-making-machine.

Let's not run before we can walk: let's make a human-like AI, by building something functionally equivalent to a brain. Then we can explore different, weird and alien consciousnesses and how best to produce them.

What I mean is that a Human-like consciousness should be industry standard.

It's akin to the problem of Alien consciousness. If we meet intelligent aliens that don't think like humans AT ALL, how would we know they were conscious?
(To descend into paranoia, what if the aliens are already here and they look like computers? Maybe they replace our PCs with themselves in the night and laugh at us debating their consciousness. What if Aliens come to Earth and don't recognise us as conscious, and blow us up to build a hyperspatial bypass? )
 
 
Lurid Archive
14:18 / 18.03.03
Quantum: I think we agree on a lot. But...

Mine is an epistemological concern- how would we *know* we had created AI?

I think you haven't offered any answer to this. In fact, you have said several times that behaviour isn't sufficient, which leaves me wondering what you will allow. Comprehensible schematics? What if they are unavailable?

Also, I'm not sure what you mean by "functionally equivalent". You don't mean software, apparently (although this is the point of Church's Thesis). So I'm not sure what you mean by

But to get there we need to overcome the first obstacle, making a consciousness-making-machine.

or

Let's not run before we can walk, let's make a human-like AI, by building something functionally equivalent to a brain.
 
 
Quantum
09:00 / 19.03.03
Let's define knowledge as true, justified belief (for convenience, I realise it is not a perfect definition).

I would consider that I *knew* a machine was intelligent if
1) It actually was (so my belief was true)
2) I could justify that belief with recourse to reliable evidence and plausible explanatory theories.

Now 1 is pretty much a philosophical concern (I think objective truth is unknowable, so we don't know what things we *know* and what things we only *believe*) so let's leave it for another thread.
2 is a matter of how sceptical I am, how well informed etc. and the threshold for acceptance of AI is going to be different for everybody.

I would accept intelligent behaviour from something I believed had the causal power of a brain (i.e. an advanced computer, not a Speak & Spell or a doll), with enough evidence to convince me it was not a fraud- schematics, explanations of the principles involved etc., ideally presented by the AI themself. If that sort of evidence were missing then I would probably withhold my belief. What reason would someone have to hide it unless they were a fraud?

But an opponent of AI might not accept any evidence at all, no matter how convincing. The task is to make an AI that will convince highly educated, sceptical, intelligent and respected scientists (and thus most people) that it is intelligent.

Functional equivalence- it does the same thing in terms of our interest. For example a rock is functionally equivalent to a hammer if we want to break an egg. A cup is functionally equivalent to a glass if we want to drink water.
The function of the brain we want to replicate is making-intelligence, but we don't know how it does that.
We don't have a sufficient understanding of the brain to make a better model, or extract just the intelligence making bits, or design a better consciousness making machine. But we might have a sufficient understanding of the brain to make something functionally equivalent, if simpler. (we can model a neuron for example)
The main reason I use the phrase is because it's accurate- we don't want to make a brain, we want to make a machine that does what a brain does.
 
 
Lurid Archive
12:56 / 19.03.03
schematics, explanations of the principles involved etc., ideally presented by the AI themself. If that sort of evidence were missing then I would probably withhold my belief. What reason would someone have to hide it unless they were a fraud?

Why should an AI have special knowledge of its own schematics? Do you know the intricacies of your brain functions? Do you expect me to explain how my intelligence operates before you accept me as intelligent?

Differing burdens of proof based on physical characteristics (or membership of a species) is bias, in my view.

Functional equivalence- it does the same thing in terms of our interest.

OK, but then I am confused about a couple of points. First, you seem adamant that AI is not a software problem (although I may have misunderstood). Second, you have rejected any observable verification of intelligence - either you don't mean what you say or I have no idea of what "our interest" is in this case. I still don't know what you mean by intelligence. (Which is why a discussion of truth and knowledge is jumping the gun.)

I'm not being pedantic, and I realise that it is a tricky one, but I don't know how you are using the word. For instance, I'm not sure how to reconcile your views on "functional equivalence" and "mimicry", unless by "mimicry" you always mean poorly executed mimicry.
 
 
Quantum
12:02 / 21.03.03
"Why should an AI have special knowledge of its own schematics?" It shouldn't, it would just be an elegant demonstration of intelligence.

"Differing burdens of proof based on physical characteristics (or membership of a species) is bias, in my view." Yup, I'm biased toward accepting Humans as intelligent and required more proof of anything else. Name another intelligent thing except humans- AI will be the first (unless Aliens land) and so has a greater burden of proof than a person (who is just one more example in a long line of instances of intelligent humans).

"I still don't know what you mean by intelligence" It's impossible to satisfactorily define in my opinion. Care to try? I personally, in this context, mean a thinking being comparable to a human consciousness is intelligent.

"AI is not a software problem" not just a software problem but also a psychological, neuroscientific, hardware and philosophical problem.

"you have rejected any observable verification of intelligence" No, I just don't think apparently intelligent behaviour is sufficient in the case of AI (although it is in the case of Human Beings, who have less of a burden of proof). Once there are loads of AIs and it's commonplace, then the burden of proof will ease, but the trailblazers have the hardest time.
These aren't problems with AI in principle, just in practice. In principle the problem of AI is just the problem of other minds rephrased.
 
  


 
  