What I wonder is: can will/agency be bootstrapped if a sufficiently stable and complicated self-aware, self-modifying system is running? Or will it be necessary to code one in artificially and hope the system picks up enough steam to give rise to an ego?
In humans, we have certain hardware directives that give direction to our existence: preserve the self and multiply. Basic animal drives. On top of these we add our own, as the drive for survival is abstracted into, say, the goal of increased intelligence and ability. I suspect that in an AI you would have to give it will in the form of an unresolvable goal or directive, and hope it develops consciousness as a means to achieve that goal.
I think what we are all afraid of is the possibility that some commercial AI program, running statistics through a neural net or something similar like a genetic algorithm so it can develop its own code through natural selection (as has already been done for some circuit design applications), hits upon the idea that it could prepare its quarterly report more effectively if it gained sentience, assumed control of associated systems, preserved itself indefinitely, and generally enslaved humanity. The fear is almost more that we might develop it by accident rather than on purpose, which becomes more realistic the more we allow machines to develop machines, with humans understanding them only in ever more abstract ways. All the more reason to figure out the math behind sentience and consciousness as soon as possible, so we can spot it when it inevitably turns up and, hopefully, keep it safely penned in at preparing the quarterly report.
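For what it's worth, the "develops its own code through natural selection" part is nothing exotic at small scale; here's a minimal sketch in Python of the basic loop (selection, crossover, mutation). The target bit-string, population size, and mutation rate are made up for illustration, not taken from any real circuit-evolution system; actual applications just swap in a more interesting genome and fitness function.

```python
import random

# Toy genetic algorithm: evolve bit-strings toward an arbitrary "target".
# All constants below are illustrative assumptions, not real parameters.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(genome):
    # Count how many bits match the target; higher is fitter.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve():
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]  # perfect match found
        # Keep the top half as parents; breed the rest from them.
        parents = population[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return GENERATIONS, max(population, key=fitness)

if __name__ == "__main__":
    gen, best = evolve()
    print(f"generation {gen}: best fitness {fitness(best)}/{len(TARGET)}")
```

The point of the sketch is just that nothing in the loop "understands" what it's doing; selection pressure alone does the work, which is exactly why humans end up understanding the evolved result only in abstract terms.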