BARBELITH underground
 



FMRI scans to read people's intentions

 
 
Closed for Business Time
10:28 / 09.02.07
Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Germany, who led the study with colleagues at University College London and Oxford University, have developed a procedure that can identify a person's intention in a prospective task from fMRI readings of the prefrontal cortex.

Summary, Guardian article, and a UCL Institute of Neurology news-flash of same.

Are we moving into a Minority Report era of precrime units? What are the ethical, practical, political and social ramifications should a fully fledged technology of this sort be routinely used in crime-fighting, intelligence-gathering and similar activities?
 
 
ORA ORA ORA ORAAAA!!
12:01 / 09.02.07
Well... to be honest, this doesn't really seem all that different to reading a monkey's intention to move an electronic arm somewhere specific, without said monkey making any particular movements. As with the monkey (in that study - you know the one I'm talking about! I'll dig up a reference if this is all entirely unknown to you), this study essentially picks between a limited number of options, which are represented in certain patterns of activity in the brain, under conditions where everything but the subject's choice is kept as stable and unchanging as possible.

Which is a tremendously different thing in principle and in practice to determining an intention from a potentially infinite field of possible intentions under a potentially infinite number of situations and stimuli.
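To make the "limited number of options" point concrete, here's a toy sketch (not the study's actual pipeline, and all numbers and names invented for illustration): a simple prototype classifier that "reads" which of two fixed intentions a subject holds from synthetic voxel patterns. It works precisely because there are only two possible answers and everything else is held constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200   # hypothetical prefrontal voxels
n_trials = 100   # trials per intention

# Each intention produces a slightly different mean activation pattern,
# buried in noise -- the "certain patterns of activity" mentioned above.
pattern_a = rng.normal(0, 1, n_voxels)   # e.g. "intend to add"
pattern_b = rng.normal(0, 1, n_voxels)   # e.g. "intend to subtract"

def make_trials(pattern, n):
    # Single-trial readings = true pattern + measurement noise.
    return pattern + rng.normal(0, 3, (n, n_voxels))

X = np.vstack([make_trials(pattern_a, n_trials),
               make_trials(pattern_b, n_trials)])
y = np.array([0] * n_trials + [1] * n_trials)

# Nearest-mean ("prototype") classifier, trained on half the trials.
train = np.arange(0, 2 * n_trials, 2)
test = np.arange(1, 2 * n_trials, 2)
mu0 = X[train][y[train] == 0].mean(axis=0)
mu1 = X[train][y[train] == 1].mean(axis=0)

def predict(x):
    # Pick whichever of the TWO known prototypes is closer.
    return 0 if np.linalg.norm(x - mu0) < np.linalg.norm(x - mu1) else 1

acc = np.mean([predict(x) == t for x, t in zip(X[test], y[test])])
print(f"two-way decoding accuracy: {acc:.0%}")
```

Note what the classifier cannot do: faced with an intention outside its two trained categories, it will still confidently answer "a" or "b". That's the gap between this and reading an intention from a potentially infinite field of possibilities.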

To determine what a particular pattern of neural activation means, outside of an experimental context with minimised variation, requires, basically, an understanding of the neural correlates of consciousness and a way to pick those out from outside someone's head. Understanding how thoughts come out of chemical reactions is one of the big problems, and I don't think anyone will solve it anytime soon. But even if they do, the practicalities of brain-scanning at the kind of detailed (i.e. neuron-specific) level you'd need to actually read the brain mean that there's basically no way in hell that this sort of thing could be used to randomly nab people in the street who idly think "hmm, I might go kill me a president today!".

Basically MRI techniques are quite low-res, and the best techniques we have (SQUID-based magnetoencephalography!) require a giant helmet in a specially shielded room as far away from electromagnetic interference as we can get, because the electrochemical impulses of the brain are quite faint and reading them is quite prone to interference. There are no techniques I am aware of which operate any further away than the surface of the head, due to the tremendous complexity and weakness of the signals being measured.

So, erm. This is cool. I approve of all this kind of brain research because of my interests, which include improving computer interfaces for severely disabled people, and this sort of direct brain-machine interface is potentially the only kind of interface available for some people. If the reading of intentions, or even of plans for muscle activity (actualised or not) can be refined then there's a whole new world opened up for some of the most disadvantaged people around (in some respects - obviously most people who are, for instance, quadriplegic, are almost by definition socio-economically advantaged over many people in the world, or just plain dead, but I hope you see what I mean).

I don't think this is opening the way for Minority Report precrime type situations (for one, that relied on precognitive metahuman-types, rather than reading intentions from people's brains directly - an entirely different kettle of fish). I don't think there will be ethical issues with this per se, because this sort of technology requires either the consent of the subject or the use of force and restraint - at which point there are ethical issues, but not with the technology, just with the guy strapping you down. Should someone develop a reliable, high-resolution, ranged brain-scanner, though, then certainly some ethical considerations spring up, but I think that sort of thing is about as likely as, oh, hand-held scanning electron microscopes, for approximately the same reasons. But, should it occur, and should we somehow develop the ability to translate neural firing into intelligible-to-others thought/language, that's when we need to worry (or, master our fears so it doesn't show up on the scan, so we don't get arrested, because if we'd nothing to hide, we'd nothing to fear, right, citizen?).

I am, of course, open to discussion re: ethics, practicality, etc.
 
 
Closed for Business Time
13:28 / 09.02.07
RFR - thanks for that very thorough and well thought out post. I agree with you that the reportage of this focusses on features of the technology that are far away in time and may not be realisable at all - as you rightly say, present tech requires highly specialised equipment and personnel, and even then it can't do more than pick out a neuronal activation trend that might mean one of two things, with about 70% accuracy. Not exactly a recipe for picking out a would-be assassin. However, my interest was piqued by this quote from one of the leading authors of the paper, Dr. Haynes. He says to the Guardian

"We see the danger that this might become compulsory one day, but we have to be aware that if we prohibit it, we are also denying people who aren't going to commit any crime the possibility of proving their innocence."

Which made me think... What impact will these kinds of technologies (alongside genetic screening for "unwanted" phenotypes and genetic therapy) have on the way in which we identify suspects, and could they erode the principle of "innocent until proven guilty"?
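On the "not exactly a recipe for picking out a would-be assassin" point, the arithmetic is worth spelling out. Back-of-envelope, with made-up numbers: even a scanner much better than chance drowns in false positives when the thing it's looking for is rare.

```python
# Bayes' rule with illustrative (invented) figures: suppose the scan
# flags a genuine bad intention 70% of the time, wrongly flags an
# innocent one 30% of the time, and 1 person in 10,000 scanned
# actually harbours the intention.
sensitivity = 0.70        # P(flagged | guilty)
false_positive = 0.30     # P(flagged | innocent)
base_rate = 1 / 10_000    # P(guilty)

p_flagged = (sensitivity * base_rate
             + false_positive * (1 - base_rate))
p_guilty_given_flag = sensitivity * base_rate / p_flagged
print(f"P(actually guilty | flagged) = {p_guilty_given_flag:.4%}")
```

Under those assumptions, well over 99.9% of the people the machine flags are innocent - which cuts straight into the "proving their innocence" framing in the Haynes quote above.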
 
  