Monday, July 19, 2010

Steve Zara - The Blue Brain Blues: Materialist ethics and simulated minds


Interesting article posted over at Richard Dawkins' site - the Richard Dawkins Foundation for Reason and Science - a brief look at the moral implications of neural simulations. As we develop more capable computers and neural simulations, more questions are bound to be raised about what consciousness might be.

A materialist sees it this way: "A materialist will have to come to the conclusion that conscious awareness, our subjective experience of the world, is what you get when certain types of information processing and storage happens."

In this view, the only real question is this: "And how similar does the behaviour of the artificial system need to be in order for us to consider what it does as equivalent to the biological system, and not just a coarse model?"

Personally, I do not buy into this reductionist perspective, yet I also do not tend to think there is something "other" that makes us conscious (a soul, for example). So I am not entirely clear where I sit on these issues.

The Blue Brain Blues - Materialist ethics and simulated minds

A materialist does not believe in magic. A materialist does not believe that anything more than the interactions of forces and particles in the physical world is needed to explain that world and everything in it. Not many people are materialists; the majority of those alive, and who have lived, believe that there are extra aspects to the world, usually termed “spiritual” or “supernatural”. But we who don't subscribe to the idea of those extras are increasing in number.

However, I am not going to argue here the truth or otherwise of the materialist view. What I want to show is that it has consequences. Serious moral consequences, and in an area of research and technology that will be of increasing importance to humanity. The moral consequences may be surprising, and yet I will suggest that they follow inevitably from the materialist position. And, for reasons I will explain later, they may – and I feel should – change the way that certain scientists and technologists approach their work.

I'm going to start by asking one of the most difficult scientific questions: what does the brain do? We can come up with all kinds of everyday answers: it produces consciousness, it results in the mind, it allows us to have experiences, it retains memories, it gives us the ability to imagine, to dream. Those are all true, but I want to consider things at a more fundamental level. One way of looking at this question is to say that the brain is a way of helping our genes to survive. But that view does not focus on the specific nature of the brain.

We can start to get an idea by looking at what it is made of. It is made of neurons and supporting tissue. Neurons are cells that respond to and process electrochemical signals. They can change their internal state and how they link to other cells in response to signals. We are able to replicate some of their behaviour in simulations called “neural networks”, which seem to be able to process signals and change state in ways similar to their biological equivalents. But how similar? And how similar does the behaviour of the artificial system need to be in order for us to consider what it does as equivalent to the biological system, and not just a coarse model?
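To make the idea of simulating a neuron concrete, here is a minimal sketch in Python of one of the simplest standard models, a leaky integrate-and-fire unit. The parameter values are illustrative choices of mine, not figures from this article or from any particular project:

    # A minimal sketch of one simulated neuron: a leaky integrate-and-fire
    # unit. The parameters are illustrative; real biophysical models are far
    # more detailed, but the principle is the same: internal state changing
    # in response to incoming signals.
    import random

    def simulate_lif(inputs, dt=1.0, tau=20.0, v_rest=-65.0,
                     v_threshold=-50.0, v_reset=-65.0):
        """Return the membrane potential trace and spike times for a
        neuron driven by one input current value per timestep."""
        v = v_rest
        trace, spikes = [], []
        for step, current in enumerate(inputs):
            # Leak toward the resting potential, plus the injected current.
            v += dt * ((v_rest - v) / tau + current)
            if v >= v_threshold:      # threshold crossed: emit a spike
                spikes.append(step)
                v = v_reset           # reset after spiking
            trace.append(v)
        return trace, spikes

    noisy_input = [random.uniform(0.0, 2.0) for _ in range(1000)]
    _, spikes = simulate_lif(noisy_input)
    print(f"{len(spikes)} spikes in 1000 timesteps")

The point of the sketch is only that nothing here is mysterious: the unit is a handful of arithmetic operations on stored state, and a large simulation is, in essence, very many such units wired together.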

But back to neurons. What they seem to be doing is processing and storing information. We analyse the world, and we can recall aspects of it. Our brains are far from perfect, but then evolution rarely requires or produces perfection. Even so, our brains are capable of amazing feats of computation and memory, as often highlighted in the capabilities of certain gifted people. What is crucial for my argument here is that the materialist viewpoint is that there is nothing extra to what the brain is doing other than this processing of information. As the brain obeys the laws of physics, there is no extra physical aspect to what is going on. Consciousness is not some sort of add-on to what is happening in the brain. It is not some sort of “energy” given off by neurons. It does not involve some magic from another realm. The brain consists of molecules and electrochemical interactions behaving in a way that can process and store information. And that is it.

So, a materialist will have to come to the conclusion that conscious awareness, our subjective experience of the world, is what you get when certain types of information processing and storage happens. (Subjective experience may not “feel” like just information processing, but then what should it feel like? But that isn't the point anyway – I'm talking about the consequences of the materialist view of the world, not what it feels like to be an aware being in the world.)

So, let's get back to neural networks. Specifically, artificial neural networks. Let's assume we get to a situation where we can produce small artificial neural networks that seem to process and store information in the same way as their biological equivalents. That would be scientifically exciting, as it would suggest that we understand all of the important functional aspects of the biological system. That's one of the main aims of simulation research in all areas of science – to see what aspects of the simulated system are necessary to reproduce in order to get realistic behaviour in the simulation. Sometimes this can be very successful, as in molecular modelling, and sometimes it can be less successful, showing that the physical system is hard or impossible to reduce to a simplified model (as in weather prediction). But let's assume success with the neural networks. Let's assume we do get close-to-identical behaviour to that of the small biological system. What to do next?
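Before answering that, a brief aside to make “storing information” concrete. Here is a minimal sketch of a Hopfield-style associative memory, in which information is held purely in connection weights; the Hebbian learning rule and the tiny pattern size are illustrative choices of mine, not anything taken from the research discussed here:

    # A minimal sketch of information storage in an artificial neural
    # network: a Hopfield-style associative memory. The learning rule and
    # pattern size are illustrative, not taken from any real project.

    def train_hebbian(patterns):
        """Build a weight matrix that stores binary (+1/-1) patterns."""
        n = len(patterns[0])
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:                   # no self-connections
                        w[i][j] += p[i] * p[j] / len(patterns)
        return w

    def recall(w, cue, steps=5):
        """Recover a stored pattern from a noisy cue by repeated update."""
        state = list(cue)
        for _ in range(steps):
            for i in range(len(state)):
                total = sum(w[i][j] * state[j] for j in range(len(state)))
                state[i] = 1 if total >= 0 else -1
        return state

    stored = [1, -1, 1, -1, 1, -1]
    w = train_hebbian([stored])
    noisy = [1, -1, -1, -1, 1, -1]       # one element corrupted
    print(recall(w, noisy) == stored)    # True: the memory is recovered

The “memory” exists nowhere except in the pattern of connection strengths – which, on the materialist view, is exactly the kind of thing our own memories are.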

Well, at least one research group has a big idea. To build models based on the reverse-engineering of entire mammalian brains. It's called the Blue Brain Project: http://bluebrain.epfl.ch/

And here the ethical alarms should start to ring loudly for the materialist. Let's summarise where we are in this argument, and in the hypothetical situation, so we can see why:

  1. A materialist believes that our minds, including our awareness and sensations, are what happens when certain types of information processing occurs.

  2. (Hypothetically) we have artificial systems that we believe process information in a way that is pretty much identical to their biological equivalents.

  3. A research group (which assumes it can achieve, or has achieved, stage 2) wants to construct an artificial system that processes information just like a biological mammalian brain.

In the past we have not treated animals well in science. But there are now protocols, at least in Western countries, that severely restrict what can be done in the name of research, with the intention of removing or minimising suffering. Mammals in particular are believed to be able to experience pain and suffering. Even a small animal is expected to be treated with care, and experimentation is regulated.

But what of their silicon equivalents? What happens when an information processing system that is functionally equivalent to the mammalian brain can be started up in a few milliseconds, and any desired neural input generated? Given the exponential rise in computing power, and assuming the hypothetical success of neural network systems, that is a feasible situation within decades. Even if the Blue Brain project runs out of steam, other groups are likely to have a go at this.
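To see why “within decades” is plausible, consider a rough back-of-envelope estimate. Every figure below is a commonly quoted order-of-magnitude approximation, not a number from this article or from the Blue Brain Project:

    # Back-of-envelope estimate of the compute needed for a real-time,
    # whole-brain-scale simulation. All figures are rough order-of-magnitude
    # approximations, not numbers from the Blue Brain Project.

    neurons = 1e11                  # ~10^11 neurons in a human brain
    synapses_per_neuron = 1e4       # ~10^4 synapses per neuron
    mean_firing_rate = 10.0         # Hz, a rough average
    ops_per_synaptic_event = 10.0   # depends heavily on model detail

    ops_needed = (neurons * synapses_per_neuron *
                  mean_firing_rate * ops_per_synaptic_event)
    print(f"~{ops_needed:.0e} operations/second for real time")  # ~1e17

    # If machine throughput doubles roughly every two years:
    machine = 1e12                  # ops/second, a notional 2010 machine
    years = 0
    while machine < ops_needed:
        machine *= 2
        years += 2
    print(f"real-time capability in roughly {years} years")      # ~34

On these assumptions a human-scale simulation is a few decades away, and a mouse brain (with roughly a thousandth as many neurons) is far closer; crude as the numbers are, they make the timescale of the moral question clear.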

Surely, to a materialist, this is not a morally acceptable situation to be left unregulated. Because the materialist would come to the conclusion that the artificial silicon and software systems have an equal ability to experience pain and suffering.

I'd like to clear a few things up at this point. The argument is not about artificial intelligence designed from the bottom-up with an understanding of the parts involved. The argument does not require that we have an explanation of how the brain produces experiences, or what particular pathways are involved. All it requires is the combination of a materialist view of the world with the ability (and intention) of some to accurately reproduce the information processing of neural systems by reverse-engineering (which is the aim of the Blue Brain project).

This might sound esoteric: a purely hypothetical argument of little practical interest. But it is far from that. The reason is that science and reason have to work based on what evidence we have, and to an extent morality should be based on the precautionary principle. We have no evidence for a non-materialistic view of what the brain does. It may not feel like we are “nothing but” the processing of information by certain types of cell, but we have no evidence that this is not the case, no matter how strongly we may believe it as individuals. Therefore, until research shows otherwise, we have to assume that a successful Blue Brain-type simulation of a mammalian brain would have subjective experiences, and could suffer. We have to start to consider if it is ethical to simulate mammalian brains before we have a good understanding of what neural activity results in sensations. Indeed, as governments are currently attempting to cut back on the number of animals used in research (a policy that is likely to continue), we have to consider whether it is morally acceptable to simulate mammalian brains at all.

It's not surprising that there have been many significant philosophical and scientific discussions about the nature of consciousness, and whether or not the machine equivalents of biological systems would have subjective experiences. It is a matter of great debate. But the consequences of the opposing views aren't equal. If “machine brains” have minds, then the amount of potential suffering that could be caused is almost limitless.

I have barely started to cover this vast subject. But I think a debate on the moral implications of neural simulations has to start, and considering the exponentially increasing power of the systems on which simulations can be run, this debate has to start soon.

Steve Zara

