BARBELITH underground
 



Minsky: AI is rubbish now

 
 
w1rebaby
14:31 / 13.05.03
Saw this Wired article, though I haven't found any transcripts of the speech it's talking about.

"AI has been brain-dead since the 1970s," said AI guru Marvin Minsky in a recent speech at Boston University. Minsky co-founded the MIT Artificial Intelligence Laboratory in 1959 with John McCarthy.

Such notions as "water is wet" and "fire is hot" have proved elusive quarry for AI researchers. Minsky accused researchers of giving up on the immense challenge of building a fully autonomous, thinking machine.


Much as Minsky is seminal as a whale's scrotum, I don't agree with him here. Or, rather, I agree, but I don't think that's the point any more.

I've said numerous times here and elsewhere that "building a fully autonomous, thinking machine" is not what we should be thinking about in AI. It's vastly expensive (and unlikely to be funded), by even the most optimistic projections extremely difficult and time-consuming, and doesn't even have a good philosophical grounding. What's more... what's the point? We already have billions of autonomous thinking machines, and more every day. Understanding how they work, sure, very useful. Using AI as a tool in this, great. As a goal?

"We're building systems that detect very subtle patterns in huge amounts of data," said Tom Mitchell, director of the Center for Automated Learning and Discovery at Carnegie Mellon University, and president of the American Association for Artificial Intelligence. "The question is, what is the best research strategy to get (us) from where we are today to an integrated, autonomous intelligent agent?"

How autonomous and intelligent would we ever want an integrated agent to be? More so than today, less so than a human, in my opinion. What we really want is an idiot-savant agent, one which is easily controlled in most respects but does some things - which we don't want to do - really, really well. This is the pattern for most devices. We don't want to walk and carry things, so we build cars, which do nothing except move things around, but do that a lot better than we do.

"The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting 3 years of their lives soldering and repairing robots, instead of making them smart. It's really shocking."

(Actually, I have some sympathy with him here, but I think that's my distrust of hardware coming out.)

There's a very good point relating to public perception of AI at the end:

AI researchers also may be the victims of their own success. The public takes for granted that the Internet is searchable and that people can make airline reservations over the phone -- these are examples of AI at work.

"It's a crazy position to be in," said Martha Pollack, a professor at the Artificial Intelligence Laboratory at the University of Michigan and executive editor of the Journal of Artificial Intelligence Research.

"As soon as we solve a problem," said Pollack, "instead of looking at the solution as AI, we come to view it as just another computer system."


Spellcheckers - not AI, just computing. AI is only AI if it doesn't actually work. Or comes back from the future to kill Edward Furlong.
 
 
No star here laces
15:33 / 13.05.03
Minsky v Brooks..... FIGHT!

IMHO this is because AI is being investigated by computer scientists and neuroscientists and not by mathematicians or philosophers who might have a more holistic view.

Minsky is, however, a relic of the heroic age of science. I'm sure that many have said similar things about physics....
 
 
Lurid Archive
17:26 / 13.05.03
I really don't think that there is a problem with computer scientists not being holistic. AI is hard work, when all is said and done.

I've said numerous times here and elsewhere that "building a fully autonomous, thinking machine" is not what we should be thinking about in AI. - fridge

To an extent I agree, fridge. There are so many useful things AI can and will be able to do, and a true thinking machine is so far off the scale, that it seems a silly goal. On the other hand, wouldn't it be a truly historic scientific accomplishment, one more important than its utility would imply? Like going to the moon? That's the way I think of it, anyway.

Also, what do you mean about philosophical grounding? That we have no idea how to build a thinking machine? Do you really think it is a philosophical problem, rather than an engineering/mathematical one?
 
 
Perfect Tommy
18:50 / 13.05.03
On the "stupid little robot" question: Unless I'm misremembering my pop science magazine articles, isn't Brooks the fella behind the idea of building robots that act like insects, so they teach themselves to walk based on sensor data and hierarchies of rules, rather than by "knowing how to walk"?

If so, I'd tend to recommend Minsky can fuck off, Granddad. (At least, philosophically; as for practicality, anything I know about Brooks is 10 years outta date.) I'm inclined to think that the brain is a lot more like a hive of insects following microrules than being driven by a homunculus that knows everything, so stupid-little-robots seems like the way to go.
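If I'm remembering Brooks right, the "hierarchies of rules" idea is almost embarrassingly simple to sketch. Something like the following toy controller, where the layer names and sensor keys are made up just to show the shape: each layer is a dumb reflex, and higher-priority layers subsume the ones below only when they have something to say.

```python
# A toy Brooks-style layered ("subsumption") controller. No world
# model, no planner -- just prioritised reflexes reacting to sensors.
# Sensor keys and action names here are hypothetical.

def avoid(sensors):
    # Highest priority: back off if something is too close.
    if sensors["distance"] < 0.2:
        return "reverse"
    return None  # nothing to say; defer to lower layers

def wander(sensors):
    # Lowest priority: otherwise, just keep moving.
    return "forward"

LAYERS = [avoid, wander]  # ordered highest priority first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"  # fallback if no layer fires
```

The interesting bit is what's missing: nothing in there "knows how to walk", yet plausible behaviour falls out of the layering.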
 
 
Linus Dunce
19:15 / 13.05.03
Also, what do you mean about philosophical grounding? That we have no idea how to build a thinking machine? Do you really think it is a philosophical problem, rather than an engineering/mathematical one?

I don't think anyone knows how to build a thinking machine. We know how to build toy thinking machines, automatons, but that's not the same thing. Perhaps, in order to progress, we do in fact need to move on from Turing's relativism and get a harder definition of what we mean by "intelligent."
 
 
Lurid Archive
19:20 / 13.05.03
No, I don't think anyone really knows how to build a thinking machine. But I don't tend to see it as a philosophical problem.

What do you mean by "Turing's relativism", IJ?
 
 
Linus Dunce
19:25 / 13.05.03
The Turing test?
 
 
w1rebaby
19:36 / 13.05.03
wouldn't it be a truly historic scientific accomplishment that is more important than the utility would imply?

It would be wonderful but it's not going to happen for a long time; I'm not even sure we'd be able to tell if it did happen. One thing I was saying to someone elsewhere about this was, we're so far from even understanding what a "thinking machine" would be like, let alone how it would work, that by the time we do understand it we'd pretty much have built one already.

Also, what do you mean about philosophical grounding? That we have no idea how to build a thinking machine? Do you really think it is a philosophical problem, rather than an engineering/mathematical one?

Well, clearly we don't have any idea of how to build a thinking machine... my main problem with "grounding" is that some people seem to think there is a thing called consciousness of which all conscious-type behaviour is a manifestation, while others (like myself) think consciousness is an emergent property of other behaviours. That's pretty basic to consciousness theory, philosophically speaking. With this, Minsky seems to me to be placing himself in the former camp; if only these youngsters would pull their fingers out and get cracking with the expert systems, they'd be building Robbie in no time.


Incidentally, about the robots: while I might occasionally sneer at roboticists, it is really just a joke and I consider the study of instantiated intelligence to be extremely valuable.


(Why is it that I am typing Minsky as "Minksy" constantly today? It sounds like a furry hand puppet.)
 
 
Linus Dunce
19:36 / 13.05.03
OK, maybe I should expand a little. Turing said, in so many words as I understand it, that if it walks like a duck and quacks like a duck, then it's a duck. But this isn't absolutely true, is it?

I think we need to know more about intelligence before we can build it. And that's where philosophy comes in.
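For what it's worth, the imitation game Turing actually proposed is just a protocol, and it fits in a dozen lines. A toy rendering, where the responder and judge functions are stand-ins I've invented, not real contenders: the judge sees transcripts only, so the verdict can depend on behaviour alone, which is exactly the "walks like a duck" move.

```python
# A toy version of Turing's imitation game. The judge is handed
# anonymised transcripts and must name which label is the human;
# it never sees what the players are made of.
import random

def imitation_game(judge, human, machine, questions):
    # Randomly assign labels so the judge can't rely on position.
    players = {"A": human, "B": machine}
    if random.random() < 0.5:
        players = {"A": machine, "B": human}
    # Behaviour only: each player answers the same questions.
    transcripts = {label: [p(q) for q in questions]
                   for label, p in players.items()}
    guess = judge(transcripts)  # label the judge believes is human
    return players[guess] is machine  # True means the machine passed
```

Whether passing this actually settles anything is, of course, the whole argument above.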
 
 
Lurid Archive
20:00 / 13.05.03
One thing I was saying to someone elsewhere about this was, we're so far from even understanding what a "thinking machine" would be like, let alone how it would work, that by the time we do understand it we'd pretty much have built one already.

Hmmm. Possibly. I very much agree with you about the emergent nature of consciousness, but that seems slightly at odds with what you have written above. Pinker, in How the Mind Works, argues that the mind is lots of specialised units from which intelligence and consciousness emerge. Surely it is conceivable that making progress on what could be the specialised units of a thinking AI would present some solutions to the problems faced? The jigsaw is easier once you have the pieces? I'm actually agreeing with you, aren't I?

Turing said, in so many words as I understand it, that if it walks like a duck and quacks like a duck, then it's a duck. But this isn't absolutely true, is it? - IJ

Not absolutely, perhaps. But it isn't that bad a position, IMO. I had this debate with Quantum(?) a little while back, where we discussed "fake intelligence". I took the position that "fake intelligence" is bound to be observably fake. He never agreed, though.
 
 
Linus Dunce
20:30 / 13.05.03
"fake intelligence" is bound to be observably fake

I'd probably agree with this. But it doesn't help. If I had no idea how to build a car but could build something that looked OK but under close examination was obviously made out of, say, marzipan, I wouldn't be very much closer to understanding how to build a real one except that I would know not to use marzipan. One option down, how many to go? It's going to take an awfully long time before I get on the road.

And digital computers. Surely they are not the way to go.
 
 
w1rebaby
21:15 / 13.05.03
I'm actually agreeing with you, aren't I?

Yeah... what I'm saying is, I don't think you can really do a "top-down" thing, since to work out what basic theory to work from, it looks like you'll have to develop it from the bottom up, and that development will likely involve lots of experimentation, production of specialised systems, and linking them together. Minsky seems to be acting like all the big questions have been solved, and all there is left to do is fill in the details.
 
 
Perfect Tommy
21:18 / 13.05.03
...unless the marzipan car (carzipan?) was roadworthy and got decent, uh, sugar mileage. Which is the point: if something functions intelligently, then it's intelligent, whether or not it's made of neurons or bits or clerks shuffling Chinese ideograms.
 
 
Linus Dunce
21:25 / 13.05.03
if something functions intelligently, then it's intelligent

Nope. It's been intelligently designed. Not the same thing at all.
 
 
Lurid Archive
21:36 / 13.05.03
I think that Tommy meant "displays the attributes of intelligence", which I would say is, when carefully applied, the same as intelligence.

Also, the Turing test isn't supposed to tell you how to build intelligence but, as Tommy rightly says, how to recognise it.
 
 
Linus Dunce
21:47 / 13.05.03
But no, more than that, it can only tell us what systems aren't intelligent.

So how will we know for sure when we get there?
 
 
Lurid Archive
21:59 / 13.05.03
Well, I disagree. If fridge builds a robot that lives with me, does the washing up, has lots of conversations with me in an intelligent manner, then I'd be inclined to believe that fridge had built a machine that could think.
 
 
Linus Dunce
22:10 / 13.05.03
What if you fell in love with the robot and ended up doing it on the kitchen table?

Would the machine be having sex? Or would you just be inclined to think so?
 
 
w1rebaby
22:26 / 13.05.03
I'd be inclined to think so. I'd probably charge extra for one that did that too.
 
 
Linus Dunce
23:19 / 14.05.03
Of course, I'm not saying I wouldn't do it myself.
 
  