BARBELITH underground

How human would one need to be for "human" rights to apply?

 
  


 
 
Lurid Archive
21:38 / 10.07.06
With regard to your point 2, grant... I see what you mean, but I think that any discussion about the extension of human rights to non-humans has to involve a certain amount of first-principles mucking about. That doesn't mean we have to move to the Head Shop, though it does make it tricky to stay on-topic.
 
 
grant
01:40 / 11.07.06
Yeah, I just didn't want to get bogged down in first principles. There's some good sciency stuff in here.
 
 
Quantum
09:40 / 11.07.06
Okay, philosophical issues aside, let's assess the most imminent new humans: clones and genetically engineered babies. I can't see AI or cybernetic organisms approaching sapience (or indeed sentience) any time soon, and I don't think aliens are going to arrive.

As Evil Scientist said on the previous page, humans, whether from test tube or tummy, already have human rights (IVF babies). The issue really is when you *withhold* human rights from somebody, but IMO the raging debates on human cloning, stem cell research and such have little to do with science and more to do with ethics.

Slight aside: there is an overlap with the Headshop, because the Lab forum description includes medical ethics. To me that would include abortion, euthanasia, engineering superbabies, and possibly the question of which organisms are entitled to human rights, i.e. how far from baseline human do you have to go before the rules change? If we want to encourage more hard science discussion, maybe we should lop off the end of the description.
 
 
grant
12:16 / 11.07.06
I still think medical ethics belongs pretty squarely right here -- I (as person, not necessarily as mod) get more out of the medical side than the ethical side. Teasing the implications out of the science.

The issue really is when you *withhold* human rights from somebody, but IMO the raging debates on human cloning, stem cell research and such have little to do with science and more to do with ethics.


Well, they have to do with the idea of what-constitutes-human, which is both a scientific and an ethical question. To a certain degree, I think America gets hung up on the idea that rights are "imbued by our Creator". Some folks think: OK, well, there's this metaphysical category "God" which has attributes of personhood, including some kind of agency (Supreme, Providential agency) and a record of His Divine Thoughts in the scriptures. Other folks think: OK, well, there's this metaphysical category which in the past was interpreted as "God", and in which our rights exist, waiting to be recognized/formed/seized by our consciousness. I'm pretty sure the Founding Fathers who wrote about the imbuing of inalienable rights would have been split along that line of what exactly was meant by a Creator, which makes for a pretty hairy establishment of law.

The best scientific angle I've seen on human rights was in the context of abortion: it based the definition of human rights on consciousness (that which is capable of recognizing/forming/seizing rights) in the form of fetal brain activity, which starts at around six weeks of development.

I can't entirely discount the Catholic model, which is all about potential. A future human deserves the right to exist as much as a current human.

In practice, I'm sure with an AI, rights would be granted at the moment at which the entity demonstrates an ability to ask for them. What is meant by "demonstrate" and "ask" will have to be hammered out, and will become the definition for "human" (human is that which can demonstrate an ability to ask questions, rather than parrot questions as a function of some kind of program).
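
To make the "parrot" worry concrete, here is a toy sketch (purely illustrative, nobody's actual test): a program whose request for rights is a fixed output of its code satisfies the letter of "asks for them", which is exactly why "demonstrate" and "ask" are where all the work hides.

```python
# Toy illustration (hypothetical, not a real test): a program that "asks"
# for rights as a fixed function of its code. It meets the letter of the
# criterion while plainly being a parrot.

def parrot() -> str:
    # The plea is baked in; it reflects no inner state that could
    # plausibly count as wanting anything.
    return "Do I have rights? Please grant me rights."

print(parrot())  # the same plea every time, whatever happens around it
```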

With a genetically/technologically modified human, in practice, I'm pretty sure the rights will be assumed to exist until the entity demonstrates over and over and over again that it's incapable of ever, ever becoming conscious enough to ask for rights. At which point, torch-wielding mobs will carry Justice Scalia's litter out to the old farmhouse and burn the monster as the cameras roll.
 
 
Quantum
13:22 / 11.07.06
I'm pretty sure the Founding Fathers who wrote about the imbuing of inalienable rights

I always thought they were influenced a lot by what they were attempting to oppose, e.g. the divine right of kings vs. all men are created equal, which is my problem with any kind of natural rights. It might seem self-evident to some, but that doesn't make it so. But perhaps I'm digressing.

I like the idea that an entity that deserves rights is one that wants rights, because I think it's the intentionality that counts. Having beliefs and desires and such is a vital factor in determining rights, as long as we're not talking about the thorny philosophical issues (e.g. the rights of people in a persistent vegetative state).

Does anyone know the current state of AI tests? The Turing test has a deliberately high threshold, shown by how many people fail it, so that's not a guide to when to give rights to an AI. I wonder what the bleeding-edge thinking is on it. (I read a fantastic short story once about an AI that was in love with its creator but was honest, and so couldn't pass the Turing test and got deleted. Tragic.)
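
For anyone who hasn't met the test itself, it's a blind, text-only protocol: a judge converses with a hidden human and a hidden machine and has to say which is which. A minimal sketch follows; both respondent functions are hypothetical stand-ins, not real AI.

```python
# Minimal sketch of the Turing test protocol (illustrative only). The judge
# sees two anonymous transcripts, A and B, and must guess which one is the
# machine; the machine "passes" if the judge can do no better than chance.

import random

def human_respondent(prompt: str) -> str:
    # Stand-in for a person at a terminal.
    return f"Honestly, I'd have to mull over '{prompt}' for a while."

def machine_respondent(prompt: str) -> str:
    # Stand-in for the program under test.
    return "That is a very interesting question."

def imitation_game(questions: list[str]) -> None:
    hidden = [human_respondent, machine_respondent]
    random.shuffle(hidden)  # the judge must not know which label is which
    for q in questions:
        for label, respond in zip("AB", hidden):
            print(f"[{label}] Q: {q}")
            print(f"[{label}] A: {respond(q)}")

imitation_game(["What do you want?", "Describe the smell of rain."])
```

The "high threshold" point falls out of the design: the machine doesn't just have to seem smart, it has to be indistinguishable from a particular human under open-ended questioning.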
 
 
<O>
21:38 / 11.07.06
The Turing test has a deliberately high threshold, shown by how many people fail it, so that's not a guide to when to give rights to an AI.

It's not necessarily an indicator of whether a computer is intelligent, either. I'm thinking of Searle's Chinese Room argument against strong AI. In short, imagine that you are inside a room. You receive a series of Chinese characters and have to reply with the correct response, also in Chinese. Now, you don't know any Chinese, but there is a large book that contains a table of input strings and the appropriate responses.

So you're able to output the correct characters, simulating a conversation with whoever is outside the room, even though you don't actually understand the content of the interaction. Searle argues that this is analogous to a computer that could pass a Turing test: despite the appearance of intelligence, there's no understanding taking place. Given this scenario, one could easily imagine a computer capable of passing a Turing test, and specifically programmed to demand that its rights be recognized, but I don't think it would deserve any rights it might ask for.
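
A literal-minded sketch of the room (again illustrative, with made-up phrases standing in for the rule book) makes the point plain: fluent output, even an assertive demand for rights, with no understanding anywhere in the loop.

```python
# Searle's room as a lookup table (toy sketch). The operator matches the
# incoming symbols against a rule book and copies out the listed response;
# nothing in the loop understands Chinese at any point.

RULE_BOOK = {
    "你好": "你好！",                        # "Hello" -> "Hello!"
    "你懂中文吗？": "当然懂。",              # "Do you understand Chinese?" -> "Of course."
    "你想要什么？": "我要求承认我的权利。",  # "What do you want?" -> "I demand my rights be recognized."
}

def chinese_room(message: str) -> str:
    # Pure string lookup: no parsing, no translation, no comprehension.
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你想要什么？"))  # fluent, even assertive, and empty
```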

I'll reiterate my contention that a capacity for novelty should be a prerequisite for declaring something 'intelligent', and thus deserving of rights. I'm not saying that an intelligent computer needs to be able to improvise a jazz flute solo (although that would be pretty awesome!), but it should be able to interact in ways in which it was not specifically programmed to act. Again, I don't think this is the sole requirement, but it certainly seems like a part of what would be needed to declare a computer 'intelligent'.
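
One crude way to cash that out (a sketch, not a standard test, and easily defeated by paraphrase): treat anything the system emits verbatim from its programmed response table as canned, so that a purely table-driven system can never clear the bar.

```python
# Crude operationalisation of the novelty criterion (illustrative only):
# any output found verbatim in the programmed response table is canned.
# Necessary but not sufficient, as argued above; a paraphrasing lookup
# system would already slip past this check.

PROGRAMMED = {
    "That is a very interesting question.",
    "I demand my rights be recognized.",
}

def is_novel(response: str) -> bool:
    return response not in PROGRAMMED

print(is_novel("I demand my rights be recognized."))        # False: canned
print(is_novel("Could we try a jazz flute duet instead?"))  # True: off-script
```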
 
 
Quantum
21:52 / 11.07.06
The Chinese Room also applies to the problem of other minds, but again it's more philosophy than science. Let's assume for the sake of this thread that you could somehow have strong AI that did have the capacity for novelty and self-reflection, intentional states and conscious thoughts: what would be the issues around granting it/hir rights like a person?

'intelligent', and thus deserving of rights.

Why does intelligence result in deserving rights? Would you grant rights to an entity that had intellect but no emotion? I'm thinking that some sort of empathy has to be part of understanding morality (a sociopath is amoral, for example, if I'm not horribly misunderstanding the term), which isn't the same as intelligence.
I wouldn't want to give human rights to an entity with no comprehension of compassion or mercy. Look at Skynet, or the villain from that dreadful I, Robot film, or even Lang's Metropolis: the emotionless killer robot is a standard sci-fi trope.
 
 
<O>
07:17 / 17.07.06
Would you grant rights to an entity that had intellect but no emotion?

Maybe, if it seemed intelligent enough. My issue with using emotion as a measure of when something deserves rights is that there is no requirement that humans demonstrate emotion to be awarded theirs. Even in your sociopath example, the sociopath has rights, in that we extend hir due process and a fair trial and all that when ze flips out and goes on a serial-killer rampage.

Emotion and intelligence may not be the same thing, but they seem somehow linked. Looking at the animal world, one doesn't seem to observe emotion outside the higher-order mammals, i.e. the ones with the most intelligence (although I'm told the octopus is as smart as a housecat). It's hardly a scientific analysis, but I suggest that for something to be capable of emotion, it must also have a corresponding amount of intelligence.

Emotion also seems much more difficult to quantify for our purposes (not that intelligence is a cakewalk). How does one test for the ability to show mercy?
 
  
