Well... to be honest, this doesn't really seem all that different to reading a monkey's intention to move an electronic arm somewhere specific, without said monkey making any particular movements. As with the monkey (in that study - you know the one I'm talking about! I'll dig up a reference if this is all entirely unknown to you), this study essentially picks between a limited number of options, which are represented in certain patterns of activity in the brain, under conditions where everything but the subject's choice is kept as stable and unchanging as possible.
Which is a tremendously different thing, in principle and in practice, from determining an intention drawn from a potentially infinite field of possible intentions, under an equally unbounded range of situations and stimuli.
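(To make that "limited number of options" point concrete, here's a toy sketch - entirely made-up synthetic data and a bog-standard scikit-learn classifier, nothing to do with the actual study's methods or numbers - of what closed-set decoding amounts to: you train a classifier on activity patterns recorded under each of a handful of pre-defined choices, and all it can tell you afterwards is which of those choices a new pattern most resembles.)

    # Toy sketch only: synthetic data, hypothetical trial/voxel counts,
    # not the actual study's pipeline or results.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials = 50    # hypothetical trials per intention
    n_voxels = 200   # hypothetical voxels/features per activity pattern

    # Two pre-defined "intentions", each with a slightly different mean
    # activation pattern, plus per-trial noise (a crude stand-in for fMRI data).
    pattern_a = rng.normal(0.0, 1.0, n_voxels)
    pattern_b = pattern_a + rng.normal(0.0, 0.3, n_voxels)
    X = np.vstack([
        pattern_a + rng.normal(0.0, 1.0, (n_trials, n_voxels)),
        pattern_b + rng.normal(0.0, 1.0, (n_trials, n_voxels)),
    ])
    y = np.array([0] * n_trials + [1] * n_trials)

    # The classifier can only ever answer "option A or option B?" -
    # it says nothing about intentions outside its small training set.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Decoding accuracy over two known options: {scores.mean():.2f}")

Note that the classifier has no concept of "some other intention entirely" - hand it a pattern from anything outside its training set and it will cheerfully, wrongly, assign it to one of the few options it knows.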
To determine what a particular pattern of neural activation means outside of an experimental context with minimised variation requires, basically, an understanding of the neural correlates of consciousness and a way to pick those out from outside someone's head. Understanding how thoughts come out of chemical reactions is one of the big problems, and I don't think anyone will solve it anytime soon. But even if they do, the practicalities of brain-scanning at the kind of detailed (i.e. neuron-specific) level you'd need to actually read the brain mean that there's basically no way in hell this sort of thing could be used to randomly nab people in the street who idly think "hmm, I might go kill me a president today!".
Basically, MRI techniques are quite low-res, and the most sensitive techniques we have (SQUID-based magnetoencephalography!) require a giant helmet full of sensors in a specially shielded room, kept as far from sources of magnetic noise as can be managed, because the fields produced by the brain's electrical activity are extremely faint and reading them is very prone to interference. There are no techniques I am aware of which operate further away than the surface of the head, due to the tremendous complexity and weakness of the signals being measured.
So, erm. This is cool. I approve of all this kind of brain research because of my interests, which include improving computer interfaces for severely disabled people; this sort of direct brain-machine interface is potentially the only kind of interface available for some people. If the reading of intentions, or even of plans for muscle activity (actualised or not), can be refined, then there's a whole new world opened up for some of the most disadvantaged people around (in some respects - obviously anyone who is, for instance, quadriplegic and within reach of this sort of technology is almost by definition socio-economically better off than many people in the world, who in the same circumstances would simply be dead, but I hope you see what I mean).
I don't think this is opening the way for Minority Report precrime-type situations (for one, that relied on precognitive metahuman-types rather than reading intentions from people's brains directly - an entirely different kettle of fish). I don't think there will be ethical issues with this per se, because this sort of technology requires either the consent of the subject or the use of force and restraint, at which point there are ethical issues, but not with the technology - just with the guy strapping you down. Should someone develop a reliable, high-resolution, ranged brain-scanner, though, then certainly some ethical considerations spring up, but I think that sort of thing is about as likely as, oh, hand-held scanning electron microscopes, for approximately the same reasons. But should it occur, and should we somehow develop the ability to translate neural firing into intelligible-to-others thought/language, that's when we need to worry (or master our fears so it doesn't show up on the scan, so we don't get arrested - because if we'd nothing to hide, we'd nothing to fear, right, citizen?).
I am, of course, open to discussion re: ethics, practicality, etc.