Saturday, May 30, 2009

Jane O'Grady - Can a machine change your mind?

Very cool article – the mind is certainly not the brain. The reductionist thinking of neuroscience is just silly sometimes. Strangely, or maybe coolly, this comes via Open Democracy.

Can a machine change your mind?

The mind is not the brain. Confusing the two, as much neuro-social-science does, leads to a dehumanised world and a controlling politics

‘Can a machine read your mind?’ – the title of a recent (February 2009) article in the Times -- is meant to be sensational but is similar to hundreds of other articles appearing with increasing frequency, and merely repeating a story that has been familiar for the last 50 years. ‘It’s just a matter of time’ is the assumption behind such articles – just a matter of time before the gap between physical brain-stuff and consciousness is bridged. The Times article plays up the social interest angle of its story by describing experiments in which people’s brain activity is taken as proof of their guilt or innocence of crimes, or in which a computer ‘could tell with 78 per cent accuracy’ which of a number of drawings shown to volunteers was the one they were concentrating on ...

There are in fact even more extreme examples than those in the Times article of how neuroscience and social science increasingly overlap. Alan Sanfey, of the Neural Decision Science Laboratory at the University of Arizona, for example, describes a neuro-economic analysis of an Ultimatum Game in which one person is given the power to make an offer to another on how to split £100. If the other rejects the offer, no one gets anything. So far, so familiar – similar to other behavioural-economics experiments that study the norms of fairness. One neuro-twist to the story, though, is that experimenters can make subjects more or less willing to accept unfair offers by subjecting their brains to Transcranial Magnetic Stimulation (TMS), a non-invasive and painless stimulation of the brain.
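The payoff rule of the Ultimatum Game described above can be sketched in a few lines. This is a minimal illustration only: the function names and the responder's "fairness" threshold are assumptions for the sake of the example, not details of Sanfey's experimental protocol.

```python
# Minimal sketch of the Ultimatum Game payoff rule described above.
# The names and the 30% acceptance threshold are illustrative
# assumptions, not taken from Sanfey's actual experiment.

def ultimatum_round(pot, offer, accept):
    """Return (proposer_payoff, responder_payoff) for one round.

    pot    -- total amount to be split (e.g. 100)
    offer  -- amount the proposer offers to the responder
    accept -- whether the responder accepts the offer
    """
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the pot")
    if accept:
        return pot - offer, offer
    return 0, 0  # rejection: neither player gets anything

def fairness_responder(pot, offer, threshold=0.3):
    """A responder who rejects 'unfair' offers below some fraction of the pot."""
    return offer >= threshold * pot

if __name__ == "__main__":
    pot = 100
    for offer in (50, 20, 10):
        accept = fairness_responder(pot, offer)
        print(offer, ultimatum_round(pot, offer, accept))
```

A purely self-interested responder would accept any positive offer (rejection pays zero either way); the point of the behavioural experiments is that real subjects, like the thresholded responder here, forgo money to punish unfairness.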

At a recent meeting of a Conservative think-tank in London about the possibility of reducing concepts of moral action entirely to scientific explanations of behaviour, one politician joked about the policy applications of Transcranial Magnetic Stimulation techniques. The world of understanding, cognition and even action can be managed by manipulating atoms rather than arguments ("opium of the people" in reverse -- chemicals inducing meaning, instead of meanings acting chemically). Even where such claims seem strongest and most striking, it is important to ask what exactly they amount to. Can we really move with ease from the world of atoms to the world of meanings? Or is any apparent smoothness due to the conceptual confusion involved in applying neuro-scientific discoveries to meaningful questions -- so that in the transition we inevitably lose essentially human parts of existence? These questions, newly pertinent because of scientific and social developments, have been anticipated in the philosophy of mind of the past 70 years.

In the late 1950s, philosophers like J J C Smart demanded why -- given the advances of science, and its success in establishing the identity of commonsense with scientific concepts -- specific states of consciousness (pain, seeing a yellow after-image) should not in fact ‘turn out to be’ specific brain states. Lightning has ‘turned out to be’ an electrical discharge, and heat to be molecular motion. In each case, said Smart, the scientific term obviously doesn’t mean the same as the commonsense term, but it does refer to the same phenomenon. Science tells us what lightning and heat actually are. Similarly, pain doesn’t mean brain state 7,008, and the person talking about her pain may well not know that what she is talking about is brain state 7,008 (any more than, prior to Alexander von Humboldt in the 19th century, people knew they were talking about H2O when they talked about water), but that is what she is ultimately talking about.

Biologists, neuroscientists, and scientised people in general, are often perplexed, even exasperated, that there should be any objection to some version of this Smart-type identification of brain states with mental states. They pat philosophers’ hands and tell them not to bother their clever little heads about the problem since it is a scientific one, and nothing to do with philosophy. ‘Just a matter of time’ again. But it surely is unavoidably a philosophical problem, since we need to know what exactly we’re dealing with. What would count as knowing that a brain state/mental state identity had been established? How could it be proved that brain state 7,008, for instance, is precisely the pain I’m having now? Well, is the usual answer, it’s just a matter of sophisticated technology being developed to correlate a specific site in the brain and movement of neurones etc with the occurrence of the pain, showing that each is happening at the same time, in the same place. Yes, but how can more than correlation be established? And if correlation in time is hard enough, what could correlation in place even come to?

Smart seemed to be conceding the correlation point when he admitted that what he postulated about brain state/mental state synchronisation could equally amount to epiphenomenalism as to identity (i.e., to the view that, with any neural event, there is also a mental, causally inactive, spin-off). Occam’s razor was his clinching argument for opting for identity – get rid of clutter and believe as simple and economical a theory as possible.

Which would be fine if, as philosophers such as Thomas Nagel have pointed out, the razor didn’t actually cut out the essential thing. How do we get rid of the sense that there always seems to be something left over from the straightforward conflation of brain state activity into mental state occurrence? In The Blue Book, Wittgenstein imagines a scenario in which scientists open someone’s head and observe his functioning brain, while he, by means of mirrors, observes it at the same time, all observers equally able to watch neurones firing, synapses opening, etc. In principle, why not? But, as Wittgenstein says, the brain-owner, unlike the scientists clustering round him, is observing, or experiencing, two things rather than one. He can observe that when he feels, or thinks about, certain things, certain activities occur in his brain at the same time. He experiences feeling or thinking in certain ways, and also he experiences observing his brain working in certain ways. The scientists only experience observing the brain working. What one could add to this is that if, at some time in the future, the subject whose brain has been observed were to see a video of what had happened during the brain-inspection, he (unless his memory were perfect or the experiment very brief) would be in the same position as the observing scientists were at the time – he would have to deduce what he had been thinking about or feeling then from what he now observes of his brain in the video.

Given the brain’s material object status, it wouldn’t, and, for identity theorists, shouldn’t, matter whose brain is being observed, and by whom, owner or non-owner, when it comes to ‘recognising’ mental states as brain states, and vice versa. But of course, it does matter – it makes all the difference. Also, though it may seem too juvenile to add: suppose the brain-owner were an expert on the history of the Restoration, and had been thinking about his new research during the experiment; the observers at the time would become no whit more knowledgeable about Restoration England. Oh well, might be the riposte, if we knew the entire history of the brain-owner’s history-acquisition, then we could read off from the lighted-up areas of his brain … etc. ‘Read off’ is still ‘deduce’, and it would require a lot of separate learning on the part of the brain-observer for her to be able to catch up with the brain-owner’s knowledge.

The observer (of the brain or brain-scan) has to infer a brain/mental state correlation, relying on the brain-owner’s report, and/or on induction – observation of similar brains in similar contexts, with a mass of correlations and brain-owners’ reports being accumulated and compared. In the examples in the Times article mentioned above, the experimenter needed to infer from movements (or lack of movements) in parts of the brain to the guilt or innocence of the brain-owner, or to rely on the experimental subject’s confirmation as to whether the drawing she seemed to be concentrating on actually was the drawing she was concentrating on. Reliance on both inference and induction surely makes ‘mind-reading’ by brain-scan open to the same sort of problems as the notoriously suspect lie-detector tests that already exist – that the experimenter’s deduction can be mistaken due to ways in which the experimental subject’s brain is, or does things, different from what is standard or expected. Anyway, the initial expectation of identity theorists that the regular coincidence of a particular type of brain state with a particular type of mental state could eventually be established (not just regular in one individual brain but across individuals’ brains in general) has largely been abandoned as impossible to achieve.

Leibniz made the same point as Wittgenstein when asking us to imagine somehow being able to wander about inside someone else’s (or it could be your own) brain. You can observe all sorts of things pulling and pushing, he says, but cannot observe the thoughts. Which is why spatial correlation of a brain state with a mental state sounds even more disorientatingly weird than temporal correlation, horribly like a category mistake. To claim, as Smart does, that sensations and thoughts are just processes in the brain makes sense in one way -- without brain movements consciousness wouldn’t happen; but what the consciousness is of, the content of consciousness (the beach on Formentera in 1983, some of your religious beliefs or disbeliefs, the difficulty of solving problems of consciousness) – is that in the brain exactly? And isn’t your pain felt in your tooth and your pleasure located in your breasts?

Just as you couldn’t pick out the precise area in a brain where a practising Jew’s disbelief in the resurrection of Jesus (or a physicalist’s disbelief in mind-body dualism, or an enamoured man’s feeling of love) is located, or that becomes activated when Jesus’s resurrection (or dualism, or the beloved) is mentioned, no more could you get the practising Jew to believe in the resurrection while preserving his other beliefs, or convert the physicalist into a dualist, or get the man to fall out of love, by tampering with or obliterating specific parts of her or his brain activity. A belief is part of a whole theory or system of beliefs, a feeling of love part of a life history, memories, beliefs, etc. Given what is called the holism of the mental, a holism both of abstract belief systems, and of concrete, personal life histories, you couldn’t alter either just by tampering piecemeal. (Obviously you could by damaging the brain so severely that the person became incapable of coherent thought or speech, actually wiping out wholesale the capacity to remember, believe and feel as others normally do, and as this person specifically had done.) Another reason why at best you get correlation or causation, not identity.

It seems more feasible, perhaps, to seek to establish mental state/brain state correlations in the case of visceral, body-related mental states, like pain, than in the case of contentful (‘intentional’) mental states that overarch, and invoke, other parts of a person’s life and belief-systems. Apart from the obvious fact that there is no neat division here but overlap and further diversity, these two sorts of mental state have at least one thing in common – can either ‘a thought [or this particular thought] about the beach in Formentera’, for instance, or ‘pain [or this particular sensation of pain]’, be on a par with lightning, heat or water? How far is consciousness comparable to any physical phenomenon? Smart seems to have an uneasy inkling of their non-comparability when he makes a point of seeking to ‘forestall irrelevant objections’ by pointing out that he is not talking about ‘the publicly observable physical object, lightning’ but about the sense datum or the brain state (which are, as he is of course arguing, one and the same) that are caused by lightning. Surely he is stressing this very obvious distinction because he has a worrying sense (anticipating Saul Kripke; see especially lecture 3) that there is not an equivalence between the equation ‘lightning = an electrical discharge’ and the equation ‘this particular (or this type of) mental state = this particular (or this type of) brain state’.

‘Lightning’, ‘water’ and ‘heat’ are commonsense terms for phenomena that are, for scientific purposes, more accurately called ‘electrical discharge’, ‘H2O’ and ‘molecular motion’. The lightning and water equations only seem analogous to a mental state = brain state equation, because the common sense terms ‘lightning’ and ‘water’, unlike their respective scientific terms, somehow contain (and therefore smuggle in) the sense of what lightning and water look like. Therefore, to say that lightning is an electrical discharge, or that water is H2O, adds objective knowledge of what the phenomenon really is (lightning isn’t after all something hurled by angry gods). But how can ‘irritation at his assumption that this problem can be so easily solved’ or ‘remembering how we sat under the honeysuckle near Orford’ be more illuminatingly called ‘brain state 50,987 with x neurons doing y [and however complicated and precise you want to make this description]’? What exactly would be added to your feeling or memory by discovering (if you could) that it was a movement of atoms?

Is a conscious state really equivalent to lightning, heat or water? For, as Kripke pointed out, once it has been discovered that water is H2O, that lightning is an electrical discharge, and that heat is molecular motion, we all know it to be the case that whenever you get water you get H2O, and so on; anyone who doesn’t is ill-informed. Only ignorance prevents the perceiver of water, lightning or heat from knowing these respective identity statements to be true. Different meanings, same reference. But that surely doesn’t apply in the case of sensations, thoughts, memories, etc.

Water seems a certain way to us, and science, in its attempt to produce what Nagel calls ‘a view from nowhere’, ignores and extracts from the seeming, in order to get at what water really is, irrespective of the viewer's race, sex, age, or other subjective idiosyncrasies, irrespective in fact of any viewer whatever. But we can't subtract the viewer when dealing with consciousness. Consciousness is unavoidably subjective and about how things seem, what things seem like to the conscious person. Of course another conscious person may deduce, or be informed about and thereby make deductions about the truth and quiddity of, another conscious person’s thoughts or feelings. And of course in some way consciousness may be caused by, or correlated with, the brain's microscopic properties. But (as Nagel hardly needed to remind us) what it feels like to be conscious of something, or to be in a particular state of pain or serenity, surely goes beyond those brain properties. A scientific description of what happens in the brain when someone has a certain thought or experience seems inevitably to leave out what the thought is about or the experience is like. Once again, there’s something left over, something which, if the person were observing their own brain states, they would be having in addition to seeing neurons fire and synapses wiggling.

What more would the person conscious of pain, of the memory of Formentera in 1983, of believing in physicalism, know about the pain, the memory or the belief, either as experienced or as described, by knowing that any of these ‘is’ brain state 39,087? In what sense ‘is’ any of them a specific brain state or set of brain states?

As Kripke said, when God (obviously metaphoric here) created the world, all he needed to do to create heat was to create molecular motion (which is what heat is), but he needed to do something extra in order to create a sensation of heat. Ditto with creating water: it was just a matter of creating H2O, but the sight, sound, taste, feel, smell(?) of water were an additional labour, actually requiring the creation of sentient organisms. (In a way, heat is in a slightly different category from lightning and water. The latter two phenomena (especially water) can be more easily imagined as unperceived entities than heat can. With heat, the objective phenomenon is much more inextricably interwoven with the subjective effect of it, which is why Kripke’s use of heat as an example can be misleading.)

The most irritating (to us lay people) aspect of philosophical and scientific attempts to reduce the mental to the neural, and to squash down human beings into being on all fours with other physical things, is that their proponents nearly always say that actually they are just putting the truth about consciousness more clearly and taking nothing away from our experience. Like politicians deviously withdrawing privileges, they expect us to be quite happy about this. Some developments of identity theory, however, are more upfront. They force consciousness into equivalence with lightning and water by impugning the ignorance of us ordinary people. The way we talk about sensations, memories and beliefs is, say eliminative materialists, hopelessly antiquated, a form of ‘folk psychology’ as hidebound and superstition-laden as talk about witches, or about epileptics being possessed by devils. ‘Folk psychology’ is a theory about how humans function, they say, that is pathetically inadequate in both describing and predicting. In time, a more scientifically sophisticated vocabulary will replace it.

Really? So we were wrong all the time about our memories and our passions? What sort of a world, I wonder, do these eliminative materialists envisage with their revised vocabulary about mental (or rather neural) states? What exactly would we be doing? What would be the point of training ourselves, or being trained, to report on our brain states?

The eliminative materialists may base their argument on the perspicuous fact that some mental terms do trail theories behind them, and can therefore be replaced, extrapolating from this the notion that such terms can be eliminated wholesale. ‘Depression’, ‘grief’, ‘melancholia’, ‘black bile’, ‘accidie’ are, it is true, not synonymous, nor do they, probably, refer to precisely the same phenomena; but does that mean that there are no such dark phenomena? ‘Dark’ is not just purple-passagey – these, like many mental states, aren’t exactly describable except by pictorial and other metaphors. But I wonder how eliminative materialists would replace Macbeth’s description, or expression, of depression, melancholy, black bile or whatever in the ‘Tomorrow, and tomorrow, and tomorrow’ speech, or George Eliot's aperçu on the insincerity of spontaneous feeling.

Metaphor bridges the gap between secluded mental states by invoking physical things that are open to all (whatever the likelihood of their being differently experienced). If indeed ‘folk psychology’ could be eradicated, along with all the metaphor and poetry that has grown up around it, then surely, with the irrepressibility of weeds, metaphor and poetry would spring up again around brain state terminology. But how would we be induced to abandon ‘folk psychology’ in the first place? Eliminativism seems to share the worst aspect of Cartesian dualism – its hopeless seclusion. Our brain states, although in principle open to anyone’s inspection, are in practice hidden. Why would we go to the trouble of talking about our inner states, sensibly say objectors to dualism, unless in the context of sharable, palpable experiences? Even more ridiculous, by the same token, is the idea that we could be taught about, and discuss, brain states. Why would we ever dream of doing so?

Worse than this would be the loss to morality and self-creation. Suppose, in a juxtaposition of eliminativism and Freudianism, a woman’s amygdala lit up in the anger zone even as she was professing not to be angry. She is duly given the expert’s better-informed diagnosis of her state of mind. But is that an advantage, particularly if she accepts the diagnosis and acts on it? Denial of anger may sometimes be dishonesty or self-deception, but may also, even while being both, be part of the suppression of anger that is so imperative in civilised life. What if a man objecting to a situation of social injustice were subjected to Transcranial Magnetic Stimulation to obliterate his present feeling of dissatisfaction and induce a feeling of pleasure? Surely what actually matters to him is the cognitive aspect of the dissatisfaction – the reason he was feeling it.

The new neuro-social-sciences are the latest of many attempts to naturalise the human – to make every aspect of our lives and selves comprehensible merely as subjects of scientific explanation. The social consequences of the naturalistic programme make it especially important to understand its philosophical limits. Not only do we become experimental subjects, but we very easily become subjected – to the particular types of control that scientific understanding invites, especially the "medical model" of the expert which offers the 'patient' diagnosis, prophylaxis, prognosis and cure. This may produce wonderful results in the right context, but should be tightly confined within the world of atoms; in the world of meanings, its essentially metaphorical status needs to be always understood. A naturalised politics, rather than a thoughtful and deliberative one, is not only creepy; it is incoherent. Ironically, it substitutes a medical metaphor for meaningful argument.

Hard-line identity theorists, and eliminativists above all, don’t appreciate how much they would change things if indeed we could come to believe and implement their theories. Our world would increasingly be leeched of meaning, morality, dignity and freedom, and if we rejected folk psychology in favour of scientific terminology about brain states, not only would we know less, not more, about ourselves; we would also have less to know about, because we would be less.

(All pictures have been taken from Alan Sanfey's very interesting presentation of Ultimatum Game results, here)

