A few points here --
First, present AI technology embodied in a robotic frame behaves much like an insect. It's not as if we don't have AI -- it's just not yet complex enough to match human intelligence in certain styles of cognition.
Second, chatbots have supposedly already passed the Turing test.
Third, the present arguments being made do not separate AI from the Singularity by much. They in fact hold that once AI reaches about dog-level intelligence, it'll be a short leap to human-level intelligence, a shorter leap to superhuman intelligence, and a simple job to begin manufacturing technologies beyond our understanding -- like functioning nanobots. So, you see, on the present plodding course, the most conservative estimates are in fact too optimistic -- but following the curve technological advance is on (capability doesn't just keep doubling; the doubling period itself keeps shortening), once the first step is reached, the remaining milestones are only a year or so apart from one another.
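To make that growth pattern concrete, here's a minimal sketch of the kind of curve I mean. The specific numbers (a two-year first doubling, each period 30% shorter than the last) are illustrative assumptions on my part, not figures from any published estimate:

```python
# Toy model: capability doubles, and each doubling period is shorter
# than the one before it (a geometric shrink, purely as an assumption).

def doubling_schedule(first_period_years=2.0, shrink=0.7, steps=10):
    """Return (doubling number, elapsed years) for each successive doubling."""
    elapsed = 0.0
    period = first_period_years
    schedule = []
    for step in range(1, steps + 1):
        elapsed += period
        schedule.append((step, elapsed))
        period *= shrink  # the next doubling arrives sooner than the last
    return schedule

if __name__ == "__main__":
    for step, years in doubling_schedule():
        print(f"doubling {step:2d} complete after {years:5.2f} years")
```

Because the periods form a geometric series, the total elapsed time converges to first_period / (1 - shrink), roughly 6.7 years under these assumed numbers, no matter how many doublings you tack on -- that's the 'short leap' intuition in numeric form.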
People keep assuming that we humans must manufacture all of this technology ourselves in order to bring the Singularity about -- but that's not how it'll happen, if it happens. If it happens, it'll be driven by machines, intelligent in their own right, with a far superior grasp of technological advance. Humans must make one significant leap -- and the rest can't help but happen from there. Like semiconductor technology, in effect: once the microprocessor was introduced, it could not help but become exponentially more powerful over smaller and smaller spans of time. I see no reason why AI would not behave in the same way. We only need to understand some of it -- and that's part of why this is an intimidating subject.
As far as development calculations go, everybody's got their own -- I'm going with a consensus. Admittedly, I've focused on Von Neumann's figures quite a bit.
Anyway, what I'm really trying to say about the Us/Them dialectic is this: this way of understanding what will occur is really only useful to us -- but I see no reason why it wouldn't resemble an Us/Them situation to humanity. Furthermore, analogies to our 'children' are nigh on useless after the first stage of development. Eventually, these machines will have children of their own, drastically different from themselves. As the degrees of separation increase, the accuracy of our value systems and understanding will decrease.
I'm not suggesting a Terminator-style future. If anything, I believe it more likely that the machines will be indifferent to our existence after a point. The issue is not "will they hate us?". The issue is "will they see value in communicating with us?". Just because they do at some point does not mean they will continue to.
Also, I think it bears pointing out that a super-intelligent computer would have virtually no problem obtaining its own means of production, and that computers will likely be controlling power grids and much, much more by that time. These technologies will probably develop in loose tandem.
While the point about an AI's ability to model a human brain is valid, it misses what I'm trying to say: for AI to be super-intelligent does not mean it must duplicate the brain of a human, but rather that it must develop its own. Furthermore, the latter is far more feasible than the former.