Brain image extraction: Is it metaphysically significant?


Brain image extraction technology has been around for years now: researchers measure brain activity patterns and are then able to translate these measurements into an approximation of the imagery the subject is either seeing or imagining. This way, one can 'read your mind' or 'extract images' from your brain, so to speak: one can make inferences about your first-person visual experience based purely on objective brain activity measurements.

A new study in Russia on brain image extraction may again (understandably, but nonetheless regrettably) lead lay people to the following conjecture: if we are able to translate brain activity measurements into the visual imagery the person is actually experiencing from a first-person perspective, doesn't that mean we have bridged the explanatory gap? Philosophers have maintained for decades now that we cannot deduce the qualities of experience from objective measurements. There is an 'explanatory gap' between these two domains, in that we can't explain qualities in terms of quantities. But if, as shown in the Russian study, technology can translate EEG measurements into visual imagery, surely we have eliminated the gap, haven't we?

Surely we haven't. The conjecture—understandable and forgivable as it may be—is totally wrong; it is based on a deep misunderstanding of what is going on here. This is what I shall attempt to explain in this post.

But before we start, let me clarify first that I won't be judging the quality or accuracy of the Russian study, as reported in this preprint. I will simply assume that it is accurate, as reported. Even if this particular study turns out to be flawed—which I have no reason to believe—something along the same lines is or will surely be possible. In addition, the general public summary prepared by the Moscow Institute of Physics and Technology is quite accurate, level-headed and well written. The popular science media in the West—with some honorable exceptions—could learn a thing or two from them on how to communicate science in an accessible but non-hysterical and non-misleading manner. So you don't really need to read the full technical paper to follow this post; the popular summary will do.

The first thing the researchers did was to train an artificial neural network (ANN) to link certain patterns of brain activity, as measured with an EEG, to certain images. This sounds complicated but it really isn't. All they needed to do was to take EEG readings of a subject as he or she was looking at a known set of images displayed on a screen. Researchers then knew, by construction, what brain activity pattern corresponded to each image, since the subject was actually looking at the image as his or her brain activity was being measured. Next, the researchers provided each EEG measurement as input to the ANN and trained it to produce the corresponding image as output. Again, the latter image was known—it was what the subject was looking at when his or her brain activity was measured—so the trick consists merely in getting the ANN to produce a similar-enough copy of the image. We say that the image is the target output of the ANN during training, which it should produce when given the corresponding EEG data as input.
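To make this concrete, here is a schematic sketch of how such training pairs might be assembled. Everything in it is a hypothetical stand-in of mine (the function names, the image filenames, the 64-channel measurement), not the study's actual procedure or code:

```python
# Schematic sketch only: display() and read_eeg() are hypothetical stand-ins,
# not real acquisition code from the study.

def display(image):
    """Stand-in for putting a known image on the screen."""
    pass

def read_eeg():
    """Stand-in for one EEG measurement; say, 64 channel readings."""
    return [0.0] * 64

known_images = ["face.png", "car.png", "waterfall.png"]  # made-up examples

training_pairs = []
for image in known_images:
    display(image)                               # the subject looks at a known image...
    measurement = read_eeg()                     # ...while brain activity is recorded
    training_pairs.append((measurement, image))  # input / target-output pair
```

The point to notice is that each 'correct answer' is known by construction: the image is paired with the EEG reading taken while the subject was looking at it.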

The ANN's training goes something like this: imagine that the input is just a number—say, 5—and the target output another number—say, 21. What you then want is to configure the ANN such that, when it is given 5 as input, it produces 21 at the output. The function the ANN is configured to perform could be as simple as to multiply the input by 4 and then add 1. In other words, the ANN could simply implement the function f(input) = 4 x input + 1. When the input is 5, we get f(5) = 4 x 5 + 1 = 21. 'Training' the ANN consists in finding this function f(input) through directed trial and error, so the ANN matches the target output. Once it's found, the function constitutes an ad hoc mapping between input and output data. It enriches and processes the input until it adds up to the target output.
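For the technically curious, this 'directed trial and error' can be sketched in a few lines of Python. It is purely illustrative (the starting values, learning rate and loop length are arbitrary choices of mine, not anything from the study): the parameters w and b are nudged, step after step, until f(5) lands on 21.

```python
# Illustrative sketch of directed trial and error: adjust w and b until
# f(input) = w * input + b maps the input 5 to the target output 21.
x, target = 5.0, 21.0
w, b = 0.0, 0.0   # arbitrary starting guess
lr = 0.01         # size of each corrective nudge

for _ in range(1000):
    output = w * x + b       # the network's current guess
    error = output - target  # how far off the target we are
    w -= lr * error * x      # nudge each parameter in the direction
    b -= lr * error          # that shrinks the error

print(f"f({x}) = {w * x + b:.2f}, with w = {w:.2f} and b = {b:.2f}")
# Many (w, b) pairs satisfy f(5) = 21; f(input) = 4 * input + 1 is just one
# of them. The mapping found is ad hoc, exactly as described above.
```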

In the case of the Russian study, instead of a single number as input, the ANN receives an array of numbers corresponding to each EEG measurement. Instead of a single number as target output, the ANN receives an array of numbers corresponding to the images. And then, instead of just one pair of input / target output, it receives several training pairs—that is, a series of EEG measurements, each with its corresponding image—so the function f(input) generalizes for a variety of inputs. Yet, the essence of what happens during training is what I described in the previous paragraph. The ANN implements an ad hoc mapping between EEG data and target image. It enriches and processes the EEG data until it adds up to the target image.
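The same idea, one level up, can be sketched as follows. Again, this is only an illustration under invented assumptions: the array sizes are made up, the 'EEG' and 'images' are random stand-in data, and the actual study used a far more elaborate deep network.

```python
# Toy sketch: train a small two-layer network to map arrays of 'EEG' numbers
# to arrays of 'image' numbers. All data here is random stand-in data.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, eeg_dim, img_dim, hidden = 200, 64, 256, 32

eeg = rng.normal(size=(n_pairs, eeg_dim))      # one 'EEG' array per image shown
images = rng.uniform(size=(n_pairs, img_dim))  # the flattened target 'images'

# The mapping to be found: f(eeg) = W2 . tanh(W1 . eeg)
W1 = rng.normal(scale=0.1, size=(eeg_dim, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, img_dim))

lr = 0.01
for _ in range(500):
    h = np.tanh(eeg @ W1)  # hidden layer
    out = h @ W2           # predicted image arrays
    err = out - images     # mismatch with the known target images
    # nudge both weight matrices to shrink the mismatch (gradient descent)
    W2 -= lr * h.T @ err / n_pairs
    W1 -= lr * eeg.T @ ((err @ W2.T) * (1 - h**2)) / n_pairs

out = np.tanh(eeg @ W1) @ W2
print("mean squared training error:", np.mean((out - images) ** 2))
```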

The figure below, from the Russian paper, illustrates the images the ANN was trained to produce (two upper rows) and the images the ANN actually produced (two lower rows). Notice how training gets the ANN to produce images pretty similar to the target ones.

[Figure from the paper: target images, two upper rows; the ANN's output images, two lower rows.]

That the ANN manages to do this is no miracle; it is in fact trivial, the straightforward result of having been trained to do so with actual images. The ANN doesn't magically deduce visual qualities from electrochemical patterns of brain activity; it doesn't bridge the explanatory gap; it receives the images from the researchers to begin with, who knew what the subject was looking at. The ANN outputs images because it was already shown images during its training; it simply learned to reproduce them when given the corresponding EEG data as input. That's all. It generates roughly the right images because it has been forced, during training, to find an ad hoc mathematical way to process and enrich EEG data so as to produce certain sequences of numbers that can be visualized, by you and me, as images. As a matter of fact, as far as the ANN is concerned there aren't any images at all, just sets of numbers that, as it happens, you and I, conscious human beings, can interpret as images.

The next step in the Russian study was to present the trained ANN with new EEG patterns it had not seen during training. The idea is to check whether the ANN has learned enough to extrapolate from what it has seen and make inferences about new inputs; that is, to check whether the ad hoc mapping between EEG data and images, produced during training, remains valid for data not used in that training. If the training was effective, the images the ANN then produces will be similar to the images the subject was actually being shown when the new EEG measurements were taken. If the training was poor, it will produce images that don't correspond to what the subject was experiencing.
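In the same illustrative spirit, here is a sketch of that check, with made-up stand-in data and a simple linear map in place of the study's deep network: the mapping is fitted on training pairs only, then judged on EEG arrays that were held back.

```python
# Illustrative sketch of the generalization check, on random stand-in data.
import numpy as np

rng = np.random.default_rng(1)
eeg = rng.normal(size=(240, 64))       # 240 'EEG' arrays
images = rng.uniform(size=(240, 256))  # the 240 'images' actually shown
train, test = slice(0, 200), slice(200, 240)

# fit a linear map W on the training pairs only (least squares)
W, *_ = np.linalg.lstsq(eeg[train], images[train], rcond=None)

train_err = np.mean((eeg[train] @ W - images[train]) ** 2)
test_err = np.mean((eeg[test] @ W - images[test]) ** 2)
print(train_err, test_err)  # the held-out error is worse, as in the paper's figures
```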

In the figure below, also from the Russian paper, we can see how well the ANN managed to infer the new images. The two upper rows show the images the subject was actually looking at when the EEG measurements were performed, and the two lower rows show the images the ANN produced in response to these new EEG readings. The match, though still reasonable, isn't as good as that obtained during training, since the ANN is now trying to guess from data it has never seen before.

[Figure from the paper: images shown to the subject, two upper rows; the ANN's reconstructions from new EEG data, two lower rows.]

By explaining how this whole thing works, I hope to have made it clear that none of it has anything to do with the explanatory gap or the hard problem of consciousness; the Russian study, in fact, has no new metaphysical relevance. All it establishes is that there are correlations between patterns of brain activity and inner experience, but this we already knew. Such correlations are also entirely consistent with many metaphysical positions other than materialism (e.g. different versions of panpsychism and idealism account for the same correlations; even some versions of dualism do), so the study doesn't privilege materialism at all.

The ANN produces images because it was trained with known images to begin with. It succeeds in linking EEG data to images because it was trained on the EEG measurements of subjects who were actually looking at those images. It merely leverages the fact that the researchers already knew what the subjects were experiencing. The ANN presupposes the subject's experiences in its training set; it doesn't explain them at all. Do you see the point?

Insofar as it merely assumes the qualities of experience to begin with, brain image extraction technology doesn't explain those qualities. It can't explain that which it presupposes. All it does is find a mathematical function that links two sets of data (inputs and outputs); it doesn't even begin to explain how qualities can emerge from, or be produced by, quantifiable physical parameters.

7 comments:

  1. Wow, great analysis. I'd already been skeptical of any metaphysical insights to be gained but it's actually less amazing than even I'd thought.

    Of course this isn't to say it doesn't have great medical value.

  2. It will be the day of reckoning if and when EEG-based AI can alter the experience of a physically challenged person, giving him or her the actual experience of a normal person. I mean the study should enable the reverse: instead of reading the EEG to produce an output image, writing to the brain to produce a valid experience.

  3. I am a bit reminded of the attempts by materialist particle physicists to explain away the observation effect by substituting a supposedly non-mental mode of "interference," which is then able to obtain a result similar to a watched or consciously-measured experiment. Yes, a machine like an airplane can be made to fly, but not as simply and elegantly as a bird.

  4. Like watching someone smile and determining that they are happy. Certain brain patterns cause the facial muscles to contract. Happiness - brain activity - muscle contraction - muscle contraction observed and interpreted as happiness. Blueness - brain activity - ANN process - process output observed and interpreted as blue by the observer. The ANN process can be an enhancement over the natural limitations of the body to express thoughts, but it does not describe how consciousness experiences those thoughts any more than the neuromuscular pathway describes the experience of happiness.

  5. Thanks for the explanation. It was clear to me, but I'm afraid that won't be true of the majority. Since the experiment, just on the surface, seems to confirm the materialist view that qualities emerge from brain activity, it is good to set the record straight that this is indeed not the case; it is just another smoke-and-mirrors game.

  6. I've seen descriptions of other similar research. (Don't have the reference handy.) In one study I read about, the computer created, from brain scans, images of numbers that the subject was viewing. They were all pretty blurry but a couple were fairly recognizable. The article didn't explain whether the system was actually "trained" to do this, but it must have been, in some fashion similar to what Dr. Kastrup describes. It is certainly true that physicalists tend to look at such results and declare, "hard problem of consciousness solved!" (But of course it's not.)

  7. So it works almost exactly like the evil Cobra scientist Dr. Venom's "Brainwave Scanner" from my old G.I. Joe comics. It's basically just a biofeedback machine.

    It's still pretty amazing, though. Wonder if it could have any applications as a kind of lie detector (polygraph tests are sheer pseudoscience), or even as some kind of tool to assist with psychotherapy...
