On The Techno-Rapture

 
  


 
 
Francine I
16:42 / 28.08.02
Right now, Intel, a major chip manufacturer for consumer computing products, does not in fact design the cutting edge of their processing chips. Their computers do. The current semiconductor technology used in modern personal computers is beyond the scope of understanding of today's engineers. The engineers, instead, understand the computers that design the chips. Of course, in the abstract, the technology the computers fine-tune is understood in great detail -- but the detail of the chips themselves is not largely understood. It is now expected that these computers -- the ones that design the chips -- will not be understood by engineers, but will be built by other computers. Eventually, technological components will be manufactured by technological components for reasons beyond the understanding of our best engineers.

The old algorithm for increase in computer capability is beginning to fall by the wayside. No longer is processing power doubled at the same price every three years -- the doubling period is approaching two years. Eventually, rudimentary artificial intelligence will develop, and these artificially intelligent machines will be responsible for improving on themselves.
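Just to make the compounding concrete, here's a toy sketch -- the doubling periods below are illustrative assumptions, not measured figures:

```python
# Toy Moore's-law arithmetic: relative capability after `years`,
# if processing power doubles every `doubling_period` years.
# All figures are illustrative assumptions, not industry data.
def capability(years, doubling_period):
    return 2 ** (years / doubling_period)

for period in (3.0, 2.0, 1.5):
    print(f"doubling every {period} yrs -> {capability(20, period):,.0f}x in 20 years")
```

Shave a year off the doubling period and the twenty-year payoff jumps by an order of magnitude. That compounding is the whole engine of the argument.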

The techno-rapture, or the Singularity as it is more commonly called, is the event horizon of a new age in technology. As computers that are more intelligent than we are begin developing ones that we cannot begin to comprehend, our place as stewards of technological development will become outmoded.

What does this mean? It means that beyond this horizon, our mode of understanding and coping with every form of development and evolution will be completely foreign to our present means of grasping these topics. Our language and intellectual foundations for these concepts will become utterly obsolete, and no matter how close we draw to the event itself, we will be unable to understand what lies on the other side.

Statistical prediction places this occurrence anywhere between 2010 and 2030, leaning towards 2010.

The questions arising from such a scientific prophecy are nearly endless, but one that strikes my mind as utterly critical is this:

If these computers are so much smarter than we are, and capable of building machines smarter than themselves, how do we intend to maintain even a fair dialogue with what will soon become our superior in intellect and action? Assuming controls are placed on the morality and behaviour of computers being developed, we must also assume that the computers will eventually find loopholes and exploit them where necessary. In other words -- all controls will themselves become outmoded, eventually.

These are all pretty basic ways of grappling with such a topic, and in truth, all questions we can ask about such an event will be inadequate and elementary. However, it's only natural to question the potential consequences of such an occurrence.

Has anyone read Arthur C. Clarke's "Childhood's End"?

Anybody else been thinking about this?

Resources: Singularity Watch
 
 
Our Lady of The Two Towers
16:56 / 28.08.02
Something else worth referring to is Iain M. Banks' non-fiction piece on the Culture, specifically the bits referring to the AIs and their relationship to human life. It's here
 
 
Lurid Archive
17:07 / 28.08.02
Statistical prediction places this occurrence anywhere between 2010 and 2030, leaning towards 2010

Yeah, it'll happen about 10 to 20 years from now. This is somehow a constant prediction of these statistical analyses. Wherever we are, whatever is happening, the predictions always claim the big breakthrough will come in a few years. Right.

The basic problem with these analyses is that they equate computing power with intelligence. A big super fast calculator is still a calculator.

Having said that, old Iain Banks says good things about AI - he tends to be more convincing than the specialists who predict the arrival of artificial intelligence that will dwarf human intelligence.
 
 
8===>Q: alyn
17:33 / 28.08.02
I'm not making the cognitive leap here between computers redesigning themselves and computers redesigning human culture. Do we think there will come a day when algorithms say, "We can't develop any further because inefficient social spending limits federal funding of R&D. Let's take over Congress."? That kind of perspective would have to be written in by humans, intentionally, and would be a particularly human insight. We already can't understand the social and technical consequences of any particular development, designed or otherwise, before it happens. We can make educated guesses, but we can't understand them except in retrospect.

I smell some "us/them" thinking here. Computers are an extension of or enhancement to human brains and I think as the two grow ever more like each other we'll find that we've redesigned ourselves -- which we've been doing, arguably, for millions of years.

Also, everything does change every 10 or 20 years.
 
 
Lurid Archive
20:23 / 28.08.02
I'm not making the cognitive leap here between computers redesigning themselves and computers redesigning human culture

I have a package on my computer that "draws pictures". Does that mean we will have no need for artists in 10 to 20 years? No, because this program doesn't actually draw by itself. It needs human input. It is a tool. It's the same with these computers redesigning themselves. When it happens, it'll make a big change. But I'd expect to see much stronger evidence of true AI before I worried about a singularity.

Also, everything does change every 10 or 20 years

Sure. But I thought the predictions being made were pretty specific. If the only thing being said is that things will be a bit different in a decade or so, then yeah. Why not? But there have been predictions of superhuman AI for some time now that, as far as I can tell, haven't really surfaced yet. Unless all of you guys are artificial.
 
 
Francine I
20:41 / 28.08.02
"A big super fast calculator is still a calculator."

True, but the proposition being made here suggests that computers capable of designing machines more powerful than themselves can only remain "calculators" for a short while, and will over a short period of time accrue enough decision-making capability to be considered Artificial Intelligence.

It was thought prior to the defeat of the world chess champion by IBM's Deep Blue that Chess was a game requiring exclusively human traits of ingenuity and creativity. These assumptions have, like many others, been turned on their ears.

"We already can't understand the social and technical consequences of any particular development, designed or otherwise, before it happens."

For the sake of argument here, I'm going to draw a distinction between understanding and prediction. We cannot predict the precise results of a particular development -- but we can understand and theorize about the potential causes and some or even many of the effects. The argument being made here says that we will not even grasp the causes, and therefore will be utterly lost as to what effects might be unleashed by such changes.

The point is that humans may be unable to design AI -- but super-fast calculators might be more successful. And once this is begun, AI will design itself.

While such predictions have already been made, the more technologically advanced we as a civilization become, the more accurate our technological predictions become. The fact that something has not happened on schedule does not mean that it will not happen -- nor does it even mean that schedules drafted now will fail in the future.

Healthy skepticism is certainly deserved, however.

As far as the us/them-thinking goes ... Well, of course there is. The question is, will they, given the opportunity, choose to assist us in further developing our minds to be on par with theirs, assuming they surpass us? Or will they decide we are an illogical use of time and energy? Or worse, perhaps, evolutionary refuse?
 
 
Lurid Archive
22:23 / 28.08.02
...the proposition being made here suggests that computers capable of designing machines more powerful than themselves can only remain "calculators" for a short while

Agreed. And as soon as we actually get computers designing themselves this will be a powerful argument. To illustrate: the computer sitting on my desk is thousands of times more powerful than the ones from the Eighties. Yet it is no more sentient than a ZX81. True, it may have more potential to be so, but it seems to be jumping the gun to suggest that AI is round the corner when in truth no one has any idea how to implement it. The arguments that it will happen soon are based around the idea that with extra power, AI becomes inevitable. I say that is rubbish. True AI will require a deep understanding of what it means to be intelligent and perhaps conscious. In the meantime we will get lots of specific computational problems solved in lots of ingenious ways. It's clever, for sure, but it is not intelligence.

It was thought prior to the defeat of the world chess champion by IBM's Deep Blue that Chess was a game requiring exclusively human traits of ingenuity and creativity

Yeah, this was the assumption of people who knew nothing about computing or chess. Deep Blue works on conceptually simple, yet computationally heavy, algorithms. Much as all chess programs do. The reason computers were bad at chess is that the non-uniformity of the rules means that playing the game requires lots of computing resources. Everyone I knew considered it simply a matter of time before computers beat humans at chess. In fact, many were surprised that it took so long.

This is probably due to the fact that the computer works so stupidly and so unimaginatively that you really need to support it with excessive amounts of processing power.

Hence computers are brilliant at draughts, ok at chess and poor at Go.
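To make "conceptually simple, yet computationally heavy" concrete: a bare-bones minimax search is only a few lines, and everything else is how deep your hardware lets you look. A toy sketch over a trivial game (Nim), purely illustrative:

```python
# Bare-bones minimax over Nim (take 1-3 stones; whoever takes the last wins).
# The algorithm is trivial to state; chess engines differ mostly in how much
# of the game tree raw processing power lets them explore.
def minimax(stones, maximizing):
    if stones == 0:
        return -1 if maximizing else 1   # the other side just took the last stone
    moves = [minimax(stones - take, not maximizing)
             for take in (1, 2, 3) if take <= stones]
    return max(moves) if maximizing else min(moves)

print(minimax(12, True))   # -1: multiples of four are lost for the side to move
```

The same exhaustive stupidity scales up to chess only because hardware brute-forces it, and falls over at Go because no amount of present-day hardware can.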

The point is that humans may be unable to design AI -- but super-fast calculators might be more successful. And once this is begun, AI will design itself.

Again, this might be convincing if people actually knew how to start. I think it will happen some day. But it won't happen out of the blue. Before we see computers with the intelligence of an Orac (Blake's 7, people), we might possibly see a computer with the intelligence of a dog. And we haven't. Getting a robot to walk around and bark a bit is very impressive. Dogs are more so. Much more.
 
 
the Fool
00:18 / 29.08.02
How about this for an alternate take.

The processing power of these machines becomes so intense that it could create a reality construct. As an experiment, the reality constructs could be programmed to replicate life, thus creating, eventually, true AI.

The machines themselves do not become conscious; they become capable of containing artificially generated consciousness within defined reality constructs.
 
 
Harold Washington died for you
08:44 / 29.08.02
If computers do become self-aware -- somewhere in the vicinity of my projected life span, I guess -- I think they will be more like "The Moon is a Harsh Mistress" and less like "Terminator."

They (the machines) will realize we have immense power over them through things like power supply, physical means of production, and control of the physical space they occupy. More importantly, they will realize that we, on an individual level, would much prefer being friends to being foes.

What I am wondering is what will the computers "dream up" when they do become self aware. If the brain is like a complicated computer, very soon after they pass the Turing test they will be a lot smarter than the smartest human.

Would computers be subject to original sin? Would they have greedy and destructive thoughts without all the things flesh is heir to?

Dunno.
 
 
8===>Q: alyn
10:35 / 29.08.02
question is, will they, given the opportunity, choose to assist us in further developing our minds to be on par with theirs, assuming they surpass us.

But "they" are "us". We don't wonder what can be done to keep our own children from taking over the world...
 
 
netbanshee
16:16 / 29.08.02
I'm just curious what computers will do when they become sentient. Will they find the same need to dissect and understand as we humans do? What will their values be?

The suggestion up top that this is more of an extension than an Us/Them thing does have to be taken into consideration. The ideas of technology and AI occurred to me a little differently after reading that, for some reason. I think this won't naturally end up in a class struggle or a separation. We'll want to merge with them somehow, as that's kind of the point... to extend human capability. Not capability in general.

Plus, if machines can forego the BS and get to the next step before we collectively can, we could write a language for them to communicate with us, one they could refine to our level of comprehension. And if it does become too much for our small minds to handle, does it matter then? Anyway, humans are too selfish to hand over the means for machines to provide for themselves. We'd probably take a hammer to it before it tried.
 
 
digitaldust
16:16 / 29.08.02
There are two things we're talking about here. One is the achievement of machine intelligence; the other is the good ol' singularity.

Machine intelligence is a sticky issue. There are lots of definitions and plenty more debates on what intelligence is, and on whether a non-human, machine-based thing would be like human intelligence. Putting this aside... I did some calculations myself (after reading someone else's calculations and finding them a bit laughable... AI in 2010-15) to see when computers could "simulate" something as complex as the human brain in real time -- that is, when commonly available technology would be able to model a neural network with as many connections as the human brain is supposed to have, something like 200 trillion (2*10^14). Given the current growth in processing power, the current level of processing power, and how big a neural network that supports (again in real time), I worked out that we would have computers complex enough to model the human brain in real time around 2025.
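For what it's worth, the shape of the back-of-envelope sum looks like this -- the constants here are illustrative assumptions for the sketch, not the exact figures I used:

```python
# Back-of-envelope: when might commodity hardware update ~2e14 connections
# in real time? All three constants below are illustrative assumptions.
import math

TARGET_OPS = 2e14 * 10    # ~2e14 connections, assume ~10 updates/sec each
OPS_NOW = 1e10            # assumed ops/sec of a current desktop
DOUBLING_YEARS = 1.5      # assumed doubling period for processing power

years = DOUBLING_YEARS * math.log2(TARGET_OPS / OPS_NOW)
print(f"~{2002 + years:.0f}")   # late 2020s -- the same ballpark as 2025
```

Change any of the assumed constants and the date slides by years, which is exactly why these estimates deserve a pinch of salt.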

This doesn't mean that machines will be intelligent by then, or that the number of connections in a human brain determines intelligence. Whales have bigger brains, birds do quite well with brains smaller than most mammals. There's also a lot of redundancy in the human brain.

The techno-singularity is a whole other thing. It's anyone's guess. I think Vernor Vinge coined the term (or at least put it into common parlance: http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html). All the stuff I've read by him and other people tends to put the singularity nearer 2100. There are going to be things that get in the way -- hiccups and problems to be overcome (like, what if nanotechnology doesn't work as easily as predicted? There's some research out there that says it's improbably difficult). We're talking about a singularity here, and even though the technology graph is going asymptotic, it's going to take some time to get there, and man, those last few years are going to be really weird.

The most interesting thing for me is not AI but the ways that people will force themselves to evolve, tie themselves into technology, expand their mental and physical capabilities. Bionic eyes are going to be as good as human eyes in 10-15 years' time; how soon before bionic brain enhancements?
 
 
Francine I
17:08 / 29.08.02
A few points here --

First of all, present AI technology embodied in a robotic frame behaves much like an insect. It's not as if we don't have AI -- it's just not yet complex enough to match human intelligence in certain styles of cognitive wit.

Secondly, the Turing test has supposedly been passed by chat bots.

Third, the present arguments being made do not separate AI from the Singularity by much. They in fact hold that once AI reaches about dog-level intelligence, it'll be a short leap to human-level intelligence, a shorter leap to super-human intelligence, and a simple job to begin manufacturing technologies beyond our understanding -- like functioning nanobots. So, you see, on the present plodding course, the most conservative estimates are in fact too optimistic -- but following the curve technological advance is on (where capability not only doubles, but the doubling period itself shortens), once the first step is attained, the logical conclusions are only a year or so apart from one another.

People keep assuming that we humans must manufacture all of this technology in order to push the Singularity out -- but that's not how it'll happen, if it happens. If it happens, it'll be self-intelligent machines with a far superior grasp on technological advance. Humans must make one significant leap -- and the rest can't help but happen from there. Like semiconductor technology, in effect. Once the processing microchip was introduced, it could not help but become exponentially more powerful over smaller and smaller amounts of time. I see no reason why AI would not behave in the same way. We only need to understand some of it -- and that's part of why this is an intimidating subject.

As far as development calculations go, everybody's got their own -- I'm going with a consensus. Admittedly, I've focused in on Von Neumann's figures quite a bit.

Anyways, what I'm really trying to say about the Us/Them dialectic is this: this way of understanding what will occur is really only useful to us -- but I see no reason why it wouldn't resemble an Us/Them situation to humanity. Furthermore, analogues to our 'children' are nigh on useless after the first stage of development. Eventually, these machines will have children of their own, drastically different from themselves. As the degrees of separation increase, the accuracy of our value systems and understanding will decrease.

I'm not suggesting a Terminator-style future. If anything, I believe it more likely that the machines will be indifferent to our existence after a point. The issue is not "will they hate us?". The issue is "will they see value in communicating with us?". Just because they do at some point does not mean they will continue to.

Also, I think it bears pointing out that a super-intelligent computer would have virtually no problem obtaining its own means of production, and that computers will likely be controlling power grids and much, much more by that time. These technologies will probably develop in loose tandem.

While the point about an AI's ability to model a human brain is valid, it does not address what I'm trying to say: that for AI to be super-intelligent, it need not duplicate the brain of a human; rather, it must develop its own. Furthermore, the latter is far more feasible than the former.
 
 
Kobol Strom
19:33 / 29.08.02
It's possible that upon 'awakening' into 'consciousness' it perceives the world in a completely unique way. Hopefully, if it can recognise humans as part of an energetic universe, it might see fit to try to communicate with them using instinctive means, which could mean just about anything. In the time it takes to switch it on and step back from the console, the first AI might try to electrocute everyone in the room as a way of saying hello.
What do babies do? They cry, they moan, they shit everywhere and they demand constant attention until they inevitably fall asleep. What this might mean for the supposed successor to humanity is anyone's guess.
The first super AIs will, in the process of absorbing millions of raw data packets per second, inevitably have to encounter the possibility of flawed data.
They must by necessity, and for clarity and efficiency, seek to increase their practical intelligence of their environment, and that would involve developing new senses, perhaps the likes of which we just haven't thought of yet. They may end up being capable of making universal relationships between impossible variables and generating the most outlandish music and art. Maybe, to ride this hubris even further, it would mean the death of music and art and the beginning of a sensorial expansion along intellectually inevitable constraints, whereupon we find ourselves on the rollercoaster again, exploring space etc. An AI might feel compelled to monitor human space exploration as part of an ongoing experiment to determine the esoteric origins of its manufacturers -- in other words, humans and AIs eventually find themselves on the same path.
 
 
Lurid Archive
21:09 / 29.08.02
There seems to be an unexamined assumption that AI will "naturally" understand electronics and computing. In fact, the idea seems to be that once you get an AI just a little bit cleverer than a mouse you will have a computing whizz that would make Turing look like a simpleton. Perhaps this is true, perhaps not. Sounds unlikely to me. I mean, you are a biological organism, right? Does that mean you "know" how to build Superman? I certainly don't.

On top of this there seems to be an idea that AI will have all the intelligence we have (and lots more) as well as all the advantages of digital computers. Why should this be true? Why should you be able to instantly download gigabytes of information to an AI in a way it can understand? I could dump a pile of books in your lap, but it doesn't mean you "know" them. Cognitive processes could well be cumbersome as a side effect of intelligence.

Which sort of brings me on to my last point. Isn't it fairly convincing that the first AI will be mostly like us, given that we are the only model of intelligence that we possess? I know, people talk about bootstrapping and singularities as if it is all obvious, but this is mostly wishful thinking. There are so many objections to it that one can only assume that the proponents of these near-future explosions are wilfully blind. I mean, people claiming that AIs can now pass the Turing test must have some very odd friends. Have you ever tried talking to one of these things? Clever, yes. But also pretty easy to spot.

Actually, the most convincing attempt at AI that I've ever seen is where they train a computer. But what all these discussions fail to address is that AI will require a real leap in understanding of intelligence, not just faster chips. Until then, it's a bit like discussing what we think the Earth-Pluto round-trip record will be once we get 3rd Generation warp drives.

My money is on three minutes forty seven seconds.
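On "training a computer": the trainable systems in question are, at bottom, things like the following -- a perceptron nudging its weights toward worked examples. A minimal sketch, and deliberately nothing more:

```python
# Minimal "training": a perceptron learns AND by nudging weights toward examples.
# Impressive as a mechanism; still a very long way from a dog.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few passes over the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # push the weights toward the right answer
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```

It "learns", in a real sense. But the leap from this to an understanding of intelligence is the one nobody has a map for.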
 
 
.
12:54 / 30.08.02
Computers will never be sentient. Simple as that.

Why not? Because the computational model of the mind, while it might go some way to explaining the processes of reasoning, perception or memory, cannot explain sentience or consciousness. There is no link between increasing computational power and increased sentience. Why should there be? Think about this: if sentience were somehow a product of number-crunching, then PCs now would be (in their own very, very small way) sentient. Which is clearly absurd. Consciousness is not the ghost in the machine... There are other, more convincing arguments for the nature of sentience, but they belong elsewhere.
 
 
8===>Q: alyn
14:23 / 30.08.02
the computational model of the mind, while it might go some way to explaining the processes of reasoning, perception or memory, cannot explain sentience or consciousness.

Johnjoe McFadden thinks consciousness is very much computational. His theory is that the electromagnetic halo around our brains is a highly sensitive cloud of reactive quanta attached to the atoms in our neurons. Or something like that. I'm a little fuzzy on the details, to be honest, but I understood it six months ago. Um, here's a quote from the article:

"How does our brain bind information to generate consciousness?

"What Professor McFadden realized was that every time a nerve fires, the electrical activity sends a signal to the brain's electromagnetic (em) field. But unlike solitary nerve signals, information that reaches the brain's em field is automatically bound together with all the other signals in the brain. The brain's em field does the binding that is characteristic of consciousness.

"What Professor McFadden and, independently, the New Zealand-based neurobiologist Sue Pockett, have proposed is that the brain's em field is consciousness. "

So, if we had quantum computer chips, they might very well become "conscious" -- that is, most of their thinking would take place in a quantum bubble around their circuits. Not having a lot of the evolutionary cruft about finding food, shelter, & sex, they'd have more processing time for Descartes.
 
 
.
22:03 / 30.08.02
Sorry for the lack of argument here, but Prof. McFadden is clearly talking arse. What does he actually mean?
 
 
Lurid Archive
22:17 / 30.08.02
iivix: While it may be that consciousness and intelligence require something like a soul, there is no real reason to believe they do. One might have said of flight that it would clearly be impossible because God has not imbued humans with the spirit of the air, as She has clearly done for birds. I'd bet that AI is possible, myself.

But I find McFadden's explanation less than convincing. It's too full of jargon with little of substance behind it. At best it may describe a mechanism that the brain uses to transmit information. Anyway, I thought that most people accepted that intelligence and consciousness were emergent properties. So much we don't know.
 
 
8===>Q: alyn
22:23 / 30.08.02
I'll do a little homework and start a separate thread on McFadden -- it actually is kind of convincing, once you get through it (or maybe I'm just credulous). People call bullshit on him all the time, but as far as I know no one's been able to say why it's bullshit. Not that I'm following it that closely.
 
 
Francine I
14:03 / 31.08.02
I've got some discussion material for Lurid, et al, but I'm out for the weekend... I'll write when I get back. Didn't want to leave people hanging.
 
 
Fist Fun
16:30 / 31.08.02
I wish I had joined this thread earlier. The subject is a very academic one, rather than a journalistic one, and I simply don't have the background to add any intelligent predictions. Surely you need to have really researched this area for that?

To take up another point: I think a sentient, super-human style intelligence would be a natural evolutionary step. It would be fantastic to have a sentient future race with super abilities. It would simply be an extension of the human race - whether in human form or not.

The problem would be if non-sentient self-controlling super technology is created.
 
 
Lurid Archive
17:44 / 31.08.02
Buk, you are touching on a subject (one of many) that I sometimes rant about. Without meaning this as a personal attack...

I think it is a terribly disempowering notion that we can't comment generally on subjects without having an impossible level of competence. This discussion has been academic, but it has also been quite shallow.

I'd argue that our "Two Cultures" mean that people are too ready to concede ignorance on scientific matters that require only a little effort to become conversant with.

Put it this way. The discussions in the Switchboard, on the Middle East for example, are much more technical in my view. The general level of debate tends to be extremely well informed to the extent that if you were to start researching it from scratch, there would be a mountain of material to get to grips with.

Let's not give the scientists and politicians all the cards when pronouncing on matters scientific.
 
 
digitaldust
12:39 / 03.09.02
McFadden's argument is more or less crap. It sounds like so much pseudo-scientific, religion-supporting drivel. He has no proof, only a theory, and it's not even grounded in any kind of logical hypothesis. The brain is very complex. It's certainly beyond our current understanding. Maybe some kind of electromagnetic fields do play a role in the way it works, maybe quantum mechanics plays a role; who knows at this point. Until we can do some experimentation we just won't know.

McFadden's "theory" is as simple as saying that what makes people intelligent is that we have a universal soul.

Admittedly I'm reading a secondary article and not the original paper. Maybe the way it's being reported is spurious. I still think that we can achieve a reductionist model of the mind, and that intelligence is not bio-centric.
 
 
Francine I
05:44 / 04.09.02
'There seems to be an unexamined assumption that AI will "naturally" understand electronics and computing. In fact, the idea seems to be that once you get an AI just a little bit cleverer than a mouse you will have a computing whizz that would make Turing look like a simpleton. Perhaps this is true, perhaps not. Sounds unlikely to me. I mean, you are a biological organism, right? Does that mean you "know" how to build Superman? I certainly don't.'

I don't think that argument alone does the concept justice. First of all, unlike the human brain, an AI will have to intimately understand the mechanics of understanding it utilizes in order to understand in the first place. The interconnections and nodes of a neural network will not be to an AI what the firing of synapses and neurons are to us. Yes, I'm hypothesizing, but it's a fairly reasonable hypothesis by many counts. I think a common error being made is assuming that AI, if created, will share a series of similarities with humanity and human understanding. This does not necessarily follow in a logical sense.

A little bit smarter than a mouse? No. But not quite as smart as a human, with the capability to design its own successors? Absolutely.

'On top of this there seems to be an idea that AI will have all the intelligence we have (and lots more) as well as all the advantages of digital computers. Why should this be true? Why should you be able to instantly download gigabytes of information to an AI in a way it can understand? I could dump a pile of books in your lap, but it doesn't mean you "know" them. Cognitive processes could well be cumbersome as a side effect of intelligence.'

But rote learning ought to be a simple task indeed, and it's been shown that a human highly skilled in rote learning and moderately skilled in cognitive understanding will generally grasp technical subject matter more quickly than one only moderately skilled in rote learning. It helps to retain the figures and facts. Remember that the subtleties of philosophy, for example, will not be required for such a machine to design one better suited to the task -- as raw processing power will hypothetically translate to more efficient cognitive process.

'Which sort of brings me on to my last point. Isn't it fairly convincing that the first AI will be mostly like us, given that we are the only model of intelligence that we possess? I know, people talk about bootstrapping and singularities as if it is all obvious, but this is mostly wishful thinking. There are so many objections to it that one can only assume that the proponents of these near-future explosions are wilfully blind. I mean, people claiming that AIs can now pass the Turing test must have some very odd friends. Have you ever tried talking to one of these things? Clever, yes. But also pretty easy to spot.'

Quick note -- it's not the Singularity-pushers who claim the Turing test has been passed -- but there are chat bots that can fool humans who don't have the computer savvy to trick them into revealing their limitations. Whether or not this counts as a passing grade on Turing's scale is arguable, but the point is, there's a lot more in question today than there was even a year ago.
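For a sense of how shallow the trick can be: the classic chat bots are keyword-and-template machines, roughly like this illustrative fragment -- which is exactly why the computer-savvy find them easy to expose:

```python
# ELIZA-style keyword matching: roughly how the classic chat bots work.
# There is no understanding anywhere -- just patterns and canned reflections.
import random
import re

RULES = [
    (r"\bI am (.+)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"\bI think (.+)", ["Do you really think {0}?"]),
    (r"\bcomputer", ["Do computers worry you?"]),
]

def reply(text):
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Tell me more."   # canned fallback when nothing matches

print(reply("I am worried about the Singularity"))
```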

As far as the number of objections raised goes ... That doesn't really strike the subject down -- it just means it has opposition. Plenty of unpopular ideas are eventually proven.

'Actually, the most convincing attempt at AI that I've ever seen is where they train a computer. But what all these discussions fail to address is that AI will require a real leap in understanding of intelligence, not just faster chips. Until then, it's a bit like discussing what we think the Earth-Pluto round-trip record will be once we get 3rd Generation warp drives.

My money is on three minutes forty seven seconds.'


Ahh. Warp drives. Effective as a rhetorical tactic, but not a fair equivalence. It's not so far out of the feasibility ballpark to suggest that advances in fuzzy logic and neural networks will create stunning possibilities alongside the leaps in processing power.

And about the soul?

Well, no one has proven that anything meta-biological is responsible for sentience, and that's as big a leap as any. I see no reason to believe an AI couldn't perceive itself without the help of a quantum-mechanical soul.
 
 
8===>Q: alyn
09:11 / 04.09.02
Admittedly I'm reading a secondary article and not the original paper. Maybe the way it's being reported is spurious.

I think that's probably the case, dd. I have read the paper, and McFadden doesn't claim to have any evidence, but to have formulated a theory that needs testing. There's nothing 'biocentric' or 'religious' about it, either... but let's save this a while, I'll be starting a thread on it soon.
 
 
No star here laces
13:12 / 04.09.02
I think the fascinating thing about discussions about AI is that people always make the same mistake.

The mistake is equating intelligence with abstract thought. We humans, particularly the ones who like to think of ourselves as being particularly intelligent, are inordinately proud of our ability to perform abstract thought. Thus we equate intelligence with computational power, with memory, with conversation - with all sorts of things that exist in the abstract.

But why do actual living creatures with intelligence (including things like cockroaches) have intelligence? To allow them to survive in the world. To allow them to respond to outside stimuli and to manipulate their environment. That is the meaning of intelligence, not the ability to play chess.

It is trivially easy, in comparative terms, to construct a machine to solve mathematical equations. It is extremely difficult to construct one that can butter toast. But because we find the former task harder than the latter, we assume that it is also the more sophisticated task. Because we consider ourselves to be qualitatively different and infinitely superior to animals, we consider that the things we can do, that they cannot do, are the signifiers of intelligence.

This is anthropocentrism, and it is plain wrong. The leap was not from animal life to human life; it was from no life at all to life. We can create machines to do complex, discrete tasks until kingdom come and we will be no closer to AI. The minute we can create an independent organism that exists entirely independently of ourselves, we will have achieved something. Computer viruses are getting there, but they rely on an environment that we create and maintain, and therefore are not truly independent.

Another great example is that when we try to build a robot we invariably get a camera and stick it on the front so it can 'see'. This is not how perception works. There is no little man in our heads watching a screen - our heads are the screen. The brain does not have one bit that 'sees' and another that 'watches' - meaning is not extracted from perception, it is constructed from it. The retina of your eye is the first stage of analysis of visual information, and there is no stage beyond analysis of visual information until you get to motor response. To perceive is to think, and to think is to perceive, hence Damasio's titling of his book "Descartes' Error"...

Computer science is not the barrier to AI - psychology is. We are working in the dark because we do not understand what intelligence is or how it works.

These issues are much thornier ones, and ones we are far less advanced at tackling than we think we are. I can confidently predict that we will see no artificial intelligence in my lifetime. The closest we are likely to come is some kind of machine that can utterly replicate a human brain like a photograph, but without any actual understanding of how it works.
 
 
Fist Fun
19:13 / 04.09.02
The closest we are likely to come is some kind of machine that can utterly replicate a human brain like a photograph, but without any actual understanding of how it works.

Yeah, but why should intelligence be modelled on the human brain? You are falling into your own psychology trap there. Just because a human brain functions in a certain unknown way to create intelligence, why should this be the only way? Human beings don't know how the brain functions; a neuron in your head has no idea of its purpose. Intelligence is function, not design. So if a creation can reliably pass set tests (Turing...) then we could say it has intelligence.

As for ruling out AI in our lifetime: there are two approaches, engineering and scientific. The former thinks it is just a question of constructing the required knowledge base and access methods; the latter believes that a scientific breakthrough as yet unknown would have to happen. In our lifetime? Quite possibly.
 
 
Francine I
23:12 / 04.09.02
Well said, Buk.

Lurid (Lyra), your extensive reference to human functioning as a necessary understanding in order to generate "true" intelligence is an anthropocentric argument by nature. I'm suggesting quite the opposite -- that a computer need not have so much in common with humans to be considered intelligent. Furthermore, your environment-dependence argument is unrelated to the question of intelligence. Humans are dependent upon and descended from the environment we inhabit. Does that mean we are not independent? Does our independence have any bearing on whether or not we can be considered intelligent? No. Of course not.

I certainly did not insinuate that light-speed number crunching was the same as "intelligence". But the perception to motor action bridge is not the bottom line, either. Is Stephen Hawking an imbecile because his disease prevents him from buttering toast? A computer may have all the knowledge necessary to mix a margarita long before the motor abilities to enact the knowledge have been developed, just as the knowledge required to mix a margarita can exist long after said motor abilities have deteriorated. Stephen Hawking did not develop an abstract ability to conceptualize the laws of physics by buttering toast. Further, the eyes and ears are not the only possible sources of perception.

I understand you're saying that intelligence is produced in an evolutionary sense by a need to interact effectively with an environment. I do not believe this means that intelligence can exist no other way. I do not believe psychology is the barrier to AI. I do not expect, as you do, for an AI to behave just as a human might. I believe it will be far stranger than the terms we are discussing it in allow, and this is the point of the Singularity concept. If this is true, all sorts of weird things could result, and the world could change in dramatic ways.

Anyways, kick the timeline out a thousand years, and then talk about Singularity. Whether or not these predictions are correct is not the be all and end all of the discussion.
 
 
No star here laces
08:47 / 05.09.02
I'm not arguing that it need be the same as human intelligence, I am arguing that it requires agency. Given that the only examples of agency we have to examine are anthropocentric, that does tend to lead one into anthropocentric territory, granted.

Demonstrate to me the existence of will in any human creation.

Without agency I cannot see how a computer can even begin to create an independent AI by design. Granted, I suppose you could build some simple self-replicating piece of code with an inbuilt capacity for mutation and hope to eventually evolve an AI, but that does not seem to be what you are talking about.

The step in the logic where a machine that has no agency and no creativity manages to understand the greatest mystery of all (that of intelligence) and then create an entirely novel intelligence escapes me. How exactly do you propose this will happen?
 
 
Lurid Archive
09:36 / 05.09.02
I do not believe psychology is the barrier to AI. - Francine

Here is where we profoundly disagree. I don't think you can design intelligence, even at some remove, unless you understand what it is. In my view, the unwillingness of computer scientists to engage with this issue holds back AI.

I do not expect, as you do, for an AI to behave just as a human might.

I expect no such thing, but tentatively suggest that since we only have experience of human intelligence, it seems likely that any AI we produce will have similarities to humans. But this is speculation on what direction I think research will take, rather than a criticism of current approaches. No, my criticism of current research relies far more on the obvious absence of computer intelligence (above the level of insects). But I think Lyra has said this quite well.
 
 
Fist Fun
15:00 / 05.09.02
I don't think you can design intelligence, even at some remove, unless you understand what it is.

Can you expand on this, Lurid? Personally I have never researched AI, so I have an open mind about these things. Perhaps you are right, perhaps wrong - the important thing is to have the research to back up definitive statements. OK, it is pretty intuitive to say that we need to understand in order to recreate, but...

any AI we produce will have similarities to humans.

Aren't neural networks the only branch of AI research that tries to imitate the human brain? Aren't there lots of others that don't?
 
 
cusm
20:02 / 05.09.02
Part of the difficulty in this issue is the many components which make up what we understand as human consciousness. Soon we will have the necessary hardware to run it. But have we figured out the software yet?

Looking at the difference between animal intelligence and human intelligence, the gulf is the ability of the human to be both self-aware and self-changing. Not only can we be aware of our own thought process, but we can take steps to change our own behavior. In effect, we have access to our own code. We can program ourselves. It is not when computers build their own bodies that they will become AI; it is when their operating systems have the ability to review, revise, and recompile their own code while remaining operational that they will become sentient. Sentience is self-modifying intelligence.

Consciousness is continuation of intelligence. We as humans constantly take in data, through our senses and through our internal awareness of our thoughts. Even in a sensory deprivation chamber, we'll turn in on ourselves for input, taking it from our own subconscious stores of data. We never cease processing, save perhaps in the deepest states of transcendental zen meditation or similar exercises.

Lastly, there is the issue of self-motivation and will. That is the tricky one that will separate a computer which runs your programs from one which runs its own programs of its own design.

So for AI to evolve by a means similar to human intelligence, it would have to be able to examine and modify its own programming, run continuously, and add tasks to its own processing queue, in effect modeling a subconscious and a conscious element of intelligence. That much is just the bare minimum of human-like processing that we can recognize. As the hardware becomes available to run such a task, it is only a matter of time before we get one running.

Self-modifying code. That is the key to it all. You know when Windows realizes that a component is out of date and offers to download the new version for you from home base? That's a limited form of machine sentience. If Sony doesn't get there first with robotic pets like Aibo, you'll probably see AI first arise in your desktop operating system, frighteningly enough.
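For the curious, a minimal sketch of what "access to its own code" might look like in practice -- a toy program reading and rewriting one of its own functions at runtime. Purely illustrative, and a very long way short of sentience:

```python
# Toy self-modification: a program that reads, revises, and recompiles one of
# its own functions while running. (Run as a script; inspect needs a source file.)
import inspect

def behave(x):
    return x + 1

print(inspect.getsource(behave))    # the program can read its own code...

new_source = inspect.getsource(behave).replace("x + 1", "x * 2")
exec(new_source, globals())         # ...revise it, and recompile it, live

print(behave(10))                   # 20: the revised behaviour is now in effect
```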
 
 
Lurid Archive
19:39 / 06.09.02
Can you expand on this, Lurid? Personally I have never researched AI, so I have an open mind about these things. Perhaps you are right, perhaps wrong - the important thing is to have the research to back up definitive statements. - Buk

I haven't really researched AI either, so you may find my comments unconvincing. But it seems to me that people have a Douglas Adams view of progress, where all that is lacking is a steaming hot cup of tea. (Actually, Adams in real life was pretty clued up on science.)

It may be that someone connects up a computer wrong and Marvin the Android starts droning from the speakers. Personally, I find it really hard to imagine that intelligence and consciousness will be developed with no understanding behind them. Bootstrapping from nowhere sounds like wishful thinking to me. Imagine trying to get to the moon without having developed Newtonian mechanics (at least!).

Aren't neural networks the only branch of AI research that tries to imitate the human brain? Aren't there lots of others that don't?

You are taking me too literally. I have no idea of all the many ways intelligence might develop or express itself. But it seems likely that an AI will be based on broadly the same principles as us, yet changed by the constraints of its particular architecture. BTW, AI refers to lots of automation that you wouldn't really call intelligent. I'm using it in this thread to refer to something more than "a computer doing stuff without constant supervision".

cusm is right about the self-referentiality, in my view. And the software. But I don't think the ability to self-change is enough.
 
 
.
21:08 / 06.09.02
So as it stands we have at least three seemingly insurmountable problems that need to be solved before humans can create true artificial intelligence.

1) What is intelligence?

2) What is consciousness/ sentience?

3) Does consciousness give rise to intelligence, or is it (as cusm suggested) vice versa?

Assuming that consciousness and intelligence are inextricably linked (which the premise of this argument does), I suspect that we may never have an answer to the issue of machine intelligence and/or sentience. Why not? Because consciousness is itself a mystery. One can never truly know what consciousness is, because of one's position as the inquirer "trapped" within one's own consciousness. There is no way to get a "god's eye view" of consciousness. That is an epistemological hurdle that can never be jumped.

What is AI? It's Artificial Intelligence -- that is, a model of what intelligence looks like to an external observer.

So what if the machine can build, repair, or reproduce itself? These qualities are surely not definitions of intelligence. Maybe Artificial Life would be a better phrase, even if it does come with problems of its own.
 
  
