The brain as a knot of consciousness

(An improved and updated version of this material has appeared in my book Why Materialism Is Baloney. The version below is kept for legacy purposes.)

A small, natural whirlpool.

As regular readers know, I am an idealist; that is, I subscribe to the notion that reality – despite being solid, continuous, and apparently autonomous – is a projection of mind. I also subscribe to the notion that the brain is a kind of filter of consciousness: It localizes consciousness – which itself is primary, irreducible, and unbound – to the space-time location of the body. I've explored these two notions separately, not only in my books, but in several articles in this blog. So, here, I will not repeat the argument (logical or empirical) for these two notions, but will instead focus on how they can co-exist.

The idea that the brain does not generate consciousness, but instead limits and filters it down, seems to require dualism and contradict idealism. After all, if all reality exists in consciousness, how can the brain – which is a part of reality – filter down that which gives it its very existence? A water filter is not made of water; a coffee filter is not made of coffee; how can a consciousness filter be made of consciousness? It sounds like a self-referential contradiction. Yet, unless this apparent contradiction is resolved, idealism cannot be reconciled with the consciousness filter theory of the brain. Below, I will argue that, although this is a self-referential problem, it does not imply a contradiction.

The first step in resolving this apparent conflict is to emphasize that the word "filter" is used metaphorically here. What is meant is an image in consciousness of a process by means of which consciousness limits and localizes its own breadth and depth. Since idealism is far less worked out as a philosophy than realism, we do not have an explicit and unambiguous terminology with which to articulate its ideas. Indeed, for the time being, we are limited to analogies and metaphors. So here is a metaphor to help one at least gain some intuition about how this could take place.

Think of consciousness as a stream. Water can flow along the stream through its entire length; that is, water is not localized in the stream, but traverses it unimpeded. Now imagine a small whirlpool in the stream: It has a visible and identifiable existence; one can locate a whirlpool and delineate its boundaries precisely; one can point at it and say "here is a whirlpool!" There is no question about how palpable and concrete the whirlpool is. Moreover, the whirlpool somewhat limits and localizes the flow of water: The water molecules trapped in it can no longer traverse the course of the entire stream unbound, but become locked, swirling around a specific and well-defined location.

Now, there is nothing to the whirlpool but water itself. The whirlpool is just a specific pattern of water movement that reflects a partial localization of that water within the stream. When I talk of the brain being a structure in consciousness that reflects the self-limitation of consciousness, I mean something very analogous to the whirlpool in a water stream. There is nothing to the brain but consciousness, yet it is a concrete, palpable reflection of the localization of that consciousness. You can point at it and say "here is a brain!"

Let us try another analogy to deepen our intuition of this. Think of the brain as a "knot" that consciousness ties on itself. Indeed, a whirlpool is a kind of single-loop knot that water "ties on itself," thereby restricting its own movement to a single, simple, circular trajectory. A single-loop knot is the smallest knot there is. Perhaps one could imagine the nervous system of a roundworm (C. elegans), with its 302 neurons, as a single-loop knot of consciousness that is extremely restrictive to awareness; the flow of consciousness in it is trapped in the simplest and smallest trajectory possible. As nervous systems become more complex, the constraints of the filter relax: more loops are added to the knot, and complex tangles emerge. Although consciousness is still restricted by the localizing structure, it has more room to flow along more complex trajectories.

Knots. Source: Wikipedia.

Extrapolating this line of thinking, the broadest nervous system would be one the size of the universe, so the trajectories entailed by the countless loops of its unfathomably complex "knot" would be co-extensive with the degrees of freedom of existence itself, known and unknown. But this amounts to saying that such an ultimate nervous system would be the universe (a whirlpool the size of the stream would be the stream), which brings us neatly back to our starting assumption: The ultimate nervous system – as far as the freedom, breadth, and depth of consciousness in it are concerned – is no nervous system at all. The ultimate breadth of consciousness is achieved when consciousness is not limited by the brain that captures and "filters" it down.

To the idealist, everything exists in consciousness. So even a process by which consciousness limits and localizes itself should produce an image in consciousness. It is thus not only unsurprising, but expected, that an image of the consciousness-localization process should exist. And that image happens to be what we call a "brain." The very structure of the brain evokes the image of a complex knot of self-referential loops that somehow capture consciousness in a "closed tangle," as opposed to allowing it to flow freely. Carl Jung intuited this almost one hundred years ago in powerful, poetic words. The passage he wrote, and which I quote below, is from his personal diary, published in 2009 under the title "The Red Book" (the quote is from page 321). It takes the form of a conversation between his ego-consciousness ("I") and an autonomous psychic complex from his unconscious ("The Cabiri"):
I: Let me see it, the great knot, all wound round! Truly a masterpiece of inscrutable nature, a wily natural tangle of roots grown through one another! Only Mother Nature, the blind weaver, could work such a tangle! A great snarled ball and a thousand small knots, all artfully tied, intertwined, truly, a human brain! Am I seeing straight? What did you do? You set my brain before me! Did you give me a sword so that its flashing sharpness slices through my brain? What were you thinking of?
The Cabiri: The womb of nature wove the brain, the womb of the earth gave the iron [of the sword]. So the Mother gave you both: entanglement and severing.
I: Mysterious! Do you really want to make me the executioner of my own brain?
The Cabiri: It befits you as the master of the lower nature. Man is entangled in his brain and the sword is also given to him to cut through the entanglement.
I: What is the entanglement you speak of?
The Cabiri: The entanglement is your madness, the sword is the overcoming of madness.
Note: I'd like to thank the participants of the Skeptiko discussion forum for the spirited debates that inspired some of the articulations above.

Our modern madness

Engraving of the eighth print of William Hogarth's A Rake's Progress depicting inmates at Bedlam asylum.
Source: Wikipedia.

A fundamental metaphysical question for at least the last few hundred years has been whether reality is objective (i.e. realism) or a projection of the mind (i.e. idealism). Here is a brief analysis of how these two lines of thinking emerge:
  • An idealist takes his immediate experience as the starting point, and builds from there. To him or her, perceptions in consciousness are the primary data of reality, requiring no reduction. Everything else consists of abstractions of a different ontological order, including the concepts of matter, energy, and space-time. In other words, we invent the notions of matter, energy, and space-time to create stories that tell us what is going on, but the primary data of reality are the perceptions themselves, not the concepts we attach to them;
  • The realist, on the other hand, takes concepts and abstractions as the starting point, like the notion that the perceptions in his own consciousness are caused by objects in an imagined, autonomous reality outside of his mind, even though he or she has no direct access to that reality. He or she then tries to reconstruct consciousness itself from those abstractions, like the notion that it is a particular arrangement of matter (i.e. a brain) that generates consciousness. Notice that there is a forward and then a backward motion here: First, the realist projects outward the independent reality of an imagined model (i.e. matter, energy, space-time); and then, he or she tries to reconstruct the primary data of experience back from that abstraction!
Clearly, the realist position not only entails a seemingly gratuitous back-and-forth, but also a tortuous attempt to substitute stories for reality. If you can take some distance from the assumptions of our cultural paradigm and read the above clear-headedly and without bias, you may be amazed that realism ever caught on at all, let alone became the reigning worldview of an entire civilization. I find it truly astonishing.

A realist may reply to this by questioning how a seemingly autonomous and continuous reality, consistently experienced by so many different people, could emerge from voluble, unstable, and disparate minds. After all, if I park my car in my garage this evening and go to sleep, tomorrow morning my car will (hopefully!) be exactly where I left it, even though I was (apparently) unconscious during the night. The same, indeed, can be said of the rest of reality: We wake up to find how the world has moved on, apparently without us, since we last fell asleep. It is this continuity and autonomy that motivates the realist to postulate that reality is not dependent on minds, but must exist on its own. This is the origin of the mad game of replacing reality with models and abstractions.

The seeming autonomy and continuity of reality is a challenge for idealism; there is just no question about that. Ignoring it makes the idealist look naive. But it is by no means an insurmountable challenge. In fact, conceiving of reasonable and coherent explanations for how a mind-projected reality can behave in a continuous and seemingly autonomous way is a much simpler challenge than explaining how conscious perception can somehow emerge from unconscious matter. The latter is the problem the realist is left with; a problem for which we have no solution today, not even a tentative one.

In my books, I have attempted to suggest at least two different ways in which a stable and continuous reality can emerge from voluble minds. In Dreamed up Reality, I consider the possibility that the continuous storyline of reality (including the laws of physics themselves) emerges out of the local interactions between minds, much like sand ripples emerge from the local interactions between grains of sand driven by wind. Technically, the idea is that physics is itself a weakly-emergent property of interactions between minds, as discussed here.

In Meaning in Absurdity, I explore the idea that our minds are much broader than the segments we are ordinarily conscious of. As such, reality may be a projection not only of the thoughts and ideas we are aware of, but also of those we are not aware of. In Jungian terminology, the idea is that the unconscious mind is also causally effective in constructing the reality we ordinarily experience. And then, since the unconscious has been empirically observed to entail a collective layer (the collective unconscious), the uniformity and continuity of reality, as far as the experience of different individuals is concerned, could have its origin at this collective layer of mind shared by us all.

The fact that our culture as a whole has adopted the assumption that reality is separate from our minds makes it easy for anyone to adopt the same assumption without looking like a fool. We find ourselves in a cultural context wherein an extraordinary form of self-deception has gained legitimacy. But then again, that we are collectively mad does not make it any less concerning that we are mad. In my view, we need to go back to basics: to our immediate experience of reality, which is all we have. Let us go back to our humanity and at least question the modern madness of replacing reality with concepts and abstractions; of taking the map for the territory. Let us recover what we lost since childhood, both as individuals and as a society. We, somehow, slowly lost contact with the nature-given truths of existence that any aboriginal today would consider us totally crazy for denying. And crazy we may be.


What am I?

Personal identity as a mask. Image source: Wikipedia.

I was in a train the other day, on my way to the airport for a week-long trip to Asia. Sitting quietly and listening to music, I was completely lost in my thoughts as the train stopped at a major railway station to let some passengers off and take some more in. It was not my destination, so I just lazily contemplated the movement of people going busily about their business on the platform; a veritable sea of hundreds of individuals, each locked, like myself, inside their own thoughts, worries, dreams, and disappointments; each immersed in a mass of other people, rubbing shoulders with others like themselves, and yet each profoundly alone in his or her unique perspective of this show we call existence. How many unfathomable life stories were represented by each of those tiny, insignificant bodies going busily about the station, like bees in a hive? How many novels could be written about their individual dramas? Each of those lives was equivalent, in complexity, richness, nuance, and significance, to my own. Although we were all immersed in the same reality, each one of us was experiencing that reality from a unique point-of-view. It then dawned on me, as my thoughts continued to wander without much systematic discipline, that the collection of experiences entailed by all those different perspectives, taken together, was the apotheosis of knowledge of what reality is all about.

My thoughts still drifting, I remembered a well-known meditation exercise about our sense of a unique identity. It consists of asking yourself who you are and then systematically eliminating every answer you can possibly come up with: Am I my name? No, I could legally change my name tomorrow and still have the same sense of identity. Am I my profession? No, I could have studied something else, or taken another job, and still be me. Am I my body? Well, if I lost a limb or had a heart transplant tomorrow I would still have the same sense of identity, so that can't be it either. Am I my genetic code? No, for I could have an identical twin with the same genetic code. Am I my particular life history, as recorded in my brain? Well, wouldn't I still have the same sense of identity if I had made different choices in the past? And so on. The conclusion of this exercise, which I had done thoroughly before, is that our inner sense of self is fundamentally independent of any story we can come up with to dress it up, like a mask. As such, it is entirely undifferentiated and identical in every person. Thus, each one of us ultimately has exactly the same inner sense of an "I."

All these thoughts came back to me in an instant, as I watched that moving sea of people on the station's platform. And I realized that, ultimately, despite the uniqueness of their life stories, they were all different points-of-view of that one sense of "I." It was as though the same "I" was taking, concurrently, multiple perspectives from within the game of existence, to accumulate as complete a view as possible of it. Each of those little pairs of eyes was like a unique camera connected to the same mainframe computer, the latter trying to derive an integrated answer to the question: What's going on?

In our ordinary lives, we answer questions like "What is going on?" or "What is it?" through observing the system in question from the outside. What is an ant colony? We set up cameras in and around it and observe it from the outside. What is a person? We scan a person's body and take further measurements from the outside. And so on. But when it comes to "What is reality?" there is no outside perspective.

As I have discussed in an earlier article in this blog, I subscribe to the idealist view, recently all but confirmed by physics, that reality and mind are one and the same thing. As such, that single "I" behind the perspectives taken by each of us is Itself reality. So the question "What is reality?" boils down to the much more personal and urgent: What am I? If the one "I" of nature, the wellspring of consciousness within us all, desires to know what It is, It cannot take any perspective outside of Itself in order to find out. It cannot stand outside of Itself and "have a look" any more than you can bite your own teeth, as Alan Watts once brilliantly commented. It has no mirror to look at either, since It is all that exists. It cannot ask someone else, since no one else exists. What can It possibly do to figure out what It is? Only one possibility is left open: To take the perspective of a subset of Itself so It can observe the rest of Itself as if from the "outside;" in other words, to pretend that It is less than Itself. There is really no other way around it; think about it for a moment before you read on.

And so the myriad dramas of human existence are born; each one a unique, amnesia-suffering perspective of the "I" looking at other parts of Itself in order to somehow try and figure out what It is. Though these are theoretical, philosophical considerations, they came to life at that railway station; not as mere theory, but as a felt experience that made sense and gave meaning to existence.

Consciousness and memory


Functional MRI brain image, showing areas of activation. Source: Wikipedia.

Often, when we wake up in the morning after a night of deep and uninterrupted sleep, a dark void of nothingness seems to stretch back to our last moments of awareness before falling asleep. Therefore, we very naturally conclude that we had absolutely no conscious experiences during the preceding few hours. Similarly, upon returning from general anesthesia, we hold on to the comforting notion of having had no subjective experiences during the course of whatever medical procedure was performed on us. Yet, strictly speaking, all we can assert from our state of mind upon waking up is that we cannot recall any experiences during the preceding hours; not that experiences were absent. Unlikely as this may sound, for all we know we may have had spectacular dreams, lifetimes of rich perception and insight, but simply not remember a thing.

It is impossible for us to distinguish between the absence of a memory and the absence of a past experience. Indeed, states we ordinarily associate with unconsciousness are known to entail a rich inner life. Our nightly dreams are just the most obvious example, but the list contains some very counter-intuitive ones. For instance, fainting caused by asphyxiation or other restrictions of blood flow to the brain is known to sometimes induce intense hallucinations and even full-blown mystical experiences of oneness and non-duality. The highly dangerous 'fainting game' played mainly by teenagers worldwide is an attempt to induce such experiences through strangulation, often at the risk of death. Erotic asphyxiation is a similar game played in combination with sexual intercourse. The effect has been described as 'a lucid, semi-hallucinogenic state [which,] combined with orgasm, [is said to be] no less powerful than cocaine.' (George D. Shuman, 'Last Breath: A Sherry Moore Novel,' Simon & Schuster, 2007, p. 80) The technique of Holotropic Breathwork uses hyperventilation to achieve a similar effect: It constricts blood vessels in the brain, causing hypoxia. This, in turn, reportedly leads to significant transpersonal and mystical experiences.

Psychedelic substances have long been known to induce similarly profound hallucinatory and mystical experiences. It has always been assumed that they do so by exciting the parts of the brain correlated with such experiences, thereby causing them. Yet, a very recent and as-yet-unpublished study has shown that at least one particular psychedelic, psilocybin (the active component of magic mushrooms), actually does the opposite: It dampens the activity of key brain regions. Study leader Professor David Nutt said: 'Our aim was to identify the precise areas inside the brain where the drug is active. We thought when we started that psilocybin would activate different parts of the brain. But we haven't found any activation anywhere. All we have found are reductions in blood flow.' Study volunteer Dr. Michael Mosley continued: 'A fall in blood flow suggests that brain activity has reduced. The areas affected were those parts of the brain that tell us who we are, where we are and what we are. When these areas were dampened down, I was no longer locked into my everyday constraints.' (see article published here) It seems that psychedelics too, like hypoxia, induce profound experiences through the deactivation of certain brain mechanisms.

If this idea is sound, we should be able to take it a step further: Brain damage, by deactivating certain parts of the brain, should also induce profound mystical experiences under the right circumstances. Sure enough, this has been reported over and over again. Here are just two prominent examples: Dr. Jill Bolte Taylor's talk 'Stroke of Insight' and a more scientific study by Cosimo Urgesi et al. called 'The Spiritual Brain: Selective Cortical Lesions Modulate Human Self-Transcendence' (Neuron 65, February 11, 2010, pp. 309-319).

What one would normally expect to be associated with sexual arousal, intense hallucinations, or profound mystical experiences is precisely a hyper-activation of the brain; just the opposite of the dampening or deactivation actually observed. If the brain is the sole cause and generator of subjective experiences, how can a deactivation of the brain lead to the most intense experiences human beings have ever had? This fundamentally contradicts the current paradigm of how conscious experience is supposedly produced. Yet, the growing abundance of evidence in this regard can no longer be ignored. Here is a quote from Dr. Pim van Lommel, a Dutch physician: "[D]uring stimulation [of the brain] with higher energy [electromagnetic fields], inhibition of local cortical functions occurs by extinction of their electrical and magnetic fields (personal communication Dr. Olaf Blanke, neurologist, Laboratory for Presurgical Epilepsy Evaluation and Functional Brain Mapping Laboratory, Department of Neurology, University Hospital of Geneva, Switzerland). Blanke recently described a patient with induced OBE [Out of Body Experience] by inhibition of cortical activity caused by more intense external electrical stimulation of neuronal networks in the gyrus angularis in a patient with epilepsy." (My italics) Interestingly, the angular gyrus is precisely one of the brain regions that the researchers behind the 2010 Neuron study, mentioned above, identified as associated with feelings of self-transcendence when damaged during surgery. A tantalizing picture seems to be emerging in neuroscience.

An alternative model that explains all this, and remains entirely consistent with all neuroscience data produced so far, is the 'Mind at Large' hypothesis popularized by Aldous Huxley in the wake of his own psychedelic experiences. The origins of the hypothesis lie in the work of French philosopher Henri Bergson, particularly in chapter one of his book 'Matter and Memory,' and it was later refined by Cambridge philosopher C. D. Broad. The idea consists in the following: The nervous system does not generate subjective experience, but rather filters it down. Conscious perception is taken to be a fundamental property of nature, irreducible to matter (a very defensible idea to this day, in light of the 'hard problem of consciousness'). This way, every conscious entity is, in principle, capable of instantly experiencing all universal phenomena and truths, across space and time. The role of the nervous system is to restrict our perceptions to the space-time locus of the body, so as to facilitate its survival. Our sense organs do not produce perceptions; they simply allow in perceptions that already exist in consciousness anyway, but which would otherwise be filtered out of awareness by the brain. Since the brain is thus a filtering mechanism that frames perception, interfering with its function through drugs or other methods naturally modulates our conscious experience to the extent that it disrupts the filtering process. It is thus no surprise at all that, as neuroscience has observed, states of the brain correlate well with subjective states of mind. But unlike current neuroscience, the 'Mind at Large' hypothesis can explain hypoxia- or brain-damage-induced experiences in a very direct and intuitive way: By deactivating parts of the filtering mechanisms of the brain, hypoxia and brain damage release consciousness from the grip of those filters and allow it to experience a broader reality.

If this hypothesis is true, as mounting evidence seems to suggest, we may have to revise our idea of 'unconsciousness.' Consciousness may never be absent. What we refer to as 'periods of unconsciousness' – be they related to sleep, general anaesthesia, or fainting – may need to be re-interpreted as periods in which memory formation is impaired. The very disruption of brain mechanisms induced by certain drugs or spiritual techniques may also impair our ability to construct coherent memories. Therefore, whatever unfathomable universes our consciousness may wander into during periods in which we are, from the point of view of an external observer, apparently unconscious, we may hardly be able to remember our experiences upon returning to an ordinary, analytical state of mind. Think of how elusive dreams can be: At the moment you wake up, you may still remember an early-morning dream; five seconds later, you have already forgotten it, but still remember that you had a dream; by the time you stand on your feet, you can't even remember that you dreamed at all. Anyone familiar with so-called 'mystical' experiences will know that a similar elusiveness often applies to those unitary, non-dual states of mind as well. Memory formation seems to be highly dependent on our ability to conceptualize our experiences by framing them within an explanatory framework; that is, by telling ourselves in words what it is that we are experiencing. Only then do we remember things with clarity later on. Yet, when consciousness is partially released from the grip of the brain's filters, our ability to conceptualize our experiences is much reduced, be it because of cortical impairment or because of the trans-linguistic nature of the experiences themselves. Either way, the net effect is the later impression that nothing at all happened! How much of our lives do we forget in this way?

Contemplating the unfathomably broader reality perceivable upon a partial or temporary release from the brain's filters, and then being able to remember enough of it to articulate something to others, may require a very delicate balance: Enough deactivation of the right brain mechanisms must be achieved, but sufficient function in the right cortical regions must be preserved to allow conceptualization and memory formation. Staying on this thin borderline is difficult; drift too much to one side, and one experiences nothing; drift too much to the other side, and one remembers nothing (or, worse, one doesn't return). The elusiveness of this balance may explain why our culture has been locked into a naive and myopic materialistic paradigm for so long. Unlike so-called 'primitive' societies, we no longer subject our bodies to the physical extremes – like strenuous effort, malnutrition, weather exposure, isolation, or untreated chronic illness – which would have impaired brain function just enough to give us a glimpse into the 'other side.' We no longer subject our youth to the ordeals and rites of initiation that would have reminded them of the true nature of reality. So the longer and more comfortably we live, the more we forget that there is something else.

Don't get me wrong: That we live in better health and comfort, and that we have become enlightened enough to spare our children unnecessary stress, are certainly good things. But a side-effect is that we've firmly planted ourselves on one side of the divide between life and death; between filtered and unfiltered consciousness. We have lost access to the 'otherworld.' We have lost our ancient ability to occasionally cross the boundary and come back to tell the tribe our tale. Indeed, for most of us, the only time we step on that elusive borderline is during our short departing moments from this Earth. Then, for the first and last time since early childhood, we may see reality unfiltered, and yet retain enough brain function to remember and articulate what we see. But the window of time to communicate anything coherent is too short for our utterances to be taken seriously. Moreover, once standing on that promontory of freedom at last, we are probably too overwhelmed by the vast vistas to even bother trying to speak. The only possible reaction to the experience might indeed be total surrender to awe, as Steve Jobs so simply, yet powerfully, captured in his now famous last words: 'OH WOW. OH WOW. OH WOW.'

Could anyone have said anything more appropriate?

Important disclaimer: Any activity that (temporarily) impairs brain function, be it through intoxication, hypoxia, or any other stress-inducing mechanism, is potentially dangerous and even life-threatening. In my view, some – like choking or ordeals – should simply never be attempted by anyone. Even apparently harmless techniques, like meditation, can have negative side-effects like mild dissociation and relaxation-induced anxiety. Extensive research, judicious preparation, and preferably professional supervision should precede and accompany the use of any of these techniques. Conservatism is advisable. Some techniques, like the use of psychedelics, are unlawful in most jurisdictions and carry severe penalties.

The problem with fundamentalist atheism

William Blake's image of the 'Inferno,' Canto XII,12-28, The Minotaur XII. Source: Wikipedia.

As someone with a fairly rationalistic, skeptical, and even scientistic background (though slowly improving), I confess to having some degree of sympathy for parts of the message coming from fundamentalist atheists like Richard Dawkins, Christopher Hitchens, and Daniel Dennett. To the extent that they promote critical thinking, I believe they make a positive contribution to society. But there may be a significant way in which they fall victim to the very criticism they direct at others. Allow me to share a thought on this.

Of the three, Daniel Dennett has most of my respect, for the very cogent, clear, and even fun way in which he argues his case. I find it delightful to watch his eloquence in action, and I have sincerely learned much from him. He has, unquestionably, an analytical intellect of enormous power; therein, in my view, lies his key weakness. Indeed, I believe Dennett is wrong regarding just about every important point he has argued, from the nature of consciousness to the validity of so-called spiritual experience. Yet, because of his eloquence, he has forced me to think very hard to try and make explicit for myself just in what way he is wrong. And I've come to conclude that not only he, but many fundamentalist atheists, are wrong in a very similar way: They implicitly define the rules of a private game, assume everyone is playing that same game according to those same rules, and then explain with much bravado how they have won the game. The problem is that others weren't aware they were supposed to be playing that game, and didn't even know the rules. Fundamentalist atheists themselves, I believe, are not cognizant that this is what is actually happening, so their passion truly is sincere. If only Carl Jung were alive, he would have a field day with this.

Here is the key rule of the game implicitly played by fundamentalist atheists: All truths can be captured fully and accurately by literal constructs of language. In other words, everything that is, has been, or will ever be true about reality can be stated in literal words with absolute accuracy. This is a severe and entirely artificial expectation; they forgot to tell Ms. Language that she was supposed to be able to do so much.

You see, the motivation for the invention and continuing evolution of language was, and continues largely to be, to facilitate the practicalities of life. Language helps us get things done in the outside, empirical world, since the coordination of activities in that extraverted landscape is what required the invention of language to begin with. Our inner landscapes – the subjective world of personal experience, like inspiration and insight – did not demand the clarity and accuracy of description that practical tasks did, so language did not evolve to capture them to the same extent. Moreover, language depends on shared experiences to ground the meaning of its associated dictionary: If you say ‘table’ I know what you mean because I myself have seen a table before, so we have this shared experience to ground the meaning of the word ‘table.’ But more private, inner experiences cannot be shared in as straightforward a manner as observing a table together. Therefore, again, one would be very naïve to expect that language can unambiguously, accurately, and completely capture the reality of inner experience and perception. This is why poets and mystics throughout history have refined the art of using metaphor, allegory, simile, and analogy to “force the horse of language onto a ladder,” as Terence McKenna once beautifully put it.

Inner experiences are as much a part of reality as any other experiences. Indeed, it can be cogently argued that all experiences are subjective in nature and, therefore, inner experiences. At a time when physics has all but demonstrated that objectivity is an illusion, denying that much of reality cannot be captured in literal language is a reflection of naiveté and, perhaps, unconscious hubris. The key rule – the fundamental assumption – behind the private game of fundamentalist atheists is, therefore, false.

Let us use a metaphor (!) to get this point across. The hypothesis here is that the elusive truths of existence are like solid objects, while language representations are limited to shadows of these objects. Describing the shadows is enough for us to resolve the practicalities of life. However, it is obviously impossible to literally describe all qualities of a solid object through the contours of its shadow. The best one can do is to attempt to convey an intuition of its real shape through the creative use of shadows, even shadows of other objects easier to visualize. This is what language metaphors are. The result is a partial and roundabout description that cannot be interpreted literally, nor construed to capture the truth fully. One is always ultimately responsible for 'reconstructing' the solid object of truth inside one's own mind, based on language hints coming from others.

Fundamentalist atheists attack the reports of non-materialist experiences (I could have used the more vague expression ‘spiritual experiences’) by pointing out all the ways in which what is described cannot be literally true. Duh. They are attacking shadows, revealing their own inability to think more broadly and 'reconstruct' the underlying message being hinted at. They a priori exclude from their mental landscapes the truths of a Blake, or a Keats; a tremendous loss. And yet they believe themselves to occupy a position of superior understanding. The irony could hardly be more cruel.

There is a significant way in which fundamentalist atheists may be unconsciously attributing to others their own cognitive limitations. In psychological terms, this is called a projection. By passing judgment onto their own projections according to the rules of their own private games, they reveal parts of their psychological makeup but assert nothing of relevance about the nature of reality.

Expressionist absurdity

Rehe im Walde, 1914, by Franz Marc. Image source: Wikipedia.

In Rationalist Spirituality I suggest that a possible answer to the perennial question of the meaning of existence is that physical reality is a kind of expressionist artwork: a device or allegory whose aim is to evoke certain subjective states – emotions and ideas – for the sake of experience and insight. A peculiar characteristic of physical reality, as an expressionist allegory, is that we all experience seemingly the same allegory from slightly different points of view. As I discuss in Dreamed up Reality, what guarantees this consistency of experience across subjects are the laws of physics and logic that give reality its continuity, self-consistency, and predictability. Thanks to this consistency, reality provides us with a common playground of shared experiences, instead of isolating each one of us in a unique, idiosyncratic universe of private reveries that would forever prevent us from communicating meaningfully with each other. So the laws of physics and, more fundamentally, those of logic are enablers of this common playing field of shared experiences we call reality.

However, physics and logic have the ‘side-effect’ of limiting the degrees of freedom available for evoking the strongest and most meaningful emotions and ideas. In expressionism, the artist parts with these limitations: expressionist art often seems to defy physics and logic; its use of vaguely realistic symbols and images goes only as far as the artist considers useful for evoking certain subjective states. Beyond that, the artist will freely depart from realism and go deep into the land of absurdity to achieve his or her expressionist goals. An example of this is Edvard Munch’s ‘The Scream’ (1893), whose utterly absurd appearance has a powerful and obvious evocative effect on most people.

In my upcoming book Meaning in Absurdity I argue for the hypothesis that, below its more superficial layer available to ordinary experience, reality is indeed fundamentally illogical and absurd. One can then fantasize about a cosmological future when reality will manifest higher degrees of absurdity for the evocation of deeper subjective states, while somehow still preserving the consistency of experience that enables us to share and evolve together in a common allegory. I touch briefly on these thoughts in my TEDx talk of last May. To complement that talk, and to illustrate what I mean when I say that absurdity has higher evocative powers than logic, I want to share with you a highly illogical dream I had earlier this year.

In the dream, I was back in a small coastal village where I used to spend weekends as a kid. It was a very quiet, sedate village with narrow streets and people going about their business without hurry. I was walking on a sidewalk and needed to cross one of the streets in order to get to where I needed to go. But as I turned to cross, I realized that the very narrow street was, concurrently with being a narrow street, also an immensely broad channel of churning, stormy seawater where huge waves crashed and deadly currents lay hidden beneath the surface. Naturally, this was profoundly illogical on the face of it; but in the dream, with my prefrontal cortex partially deactivated, nothing stopped me from allowing the contradiction to be completely real to my experience.

The cognitive dissonance in my mind was palpable. While the reality of the narrow street made the crossing very tempting, due to the tantalizing proximity of the other side, the concurrent reality of the wide water inferno was a deterrent to any attempt at crossing it: I would most likely be swallowed up by the huge waves or dragged under by the currents.

As I stood there contemplating this impossible dilemma, I suddenly saw a group of classical ballerinas, dressed in full attire, run on the other side of the street towards the water. They ran in a line and, as the first one approached the edge of the sidewalk, I thought to myself: ‘If she jumps in she will be as good as dead, for there is just no way such a small, delicate creature can survive this raging water inferno.’ But with no hesitation whatsoever, she jumped in, immediately followed by the next one, and the next, until all of them had jumped, imbued with an incomprehensible confidence in their actions. In the reality of the narrow street, this all took place just a few meters away from me, so I could witness, in horror, every detail of their suicidal behavior. My immediate thought was: ‘Damn it, now I have to jump in and rescue them, otherwise I will have a few casualties on my hands!’ And in I went…

Once in the water, my worst expectations were fully confirmed: despite being a relatively good swimmer, I could barely hold my head above the water; the force of the currents was incredible, the waves gigantic, and I thought I had just made the last mistake of my life. Now the ballerinas seemed to be very far away, all the way across this very broad channel of churning seawater. With difficulty, I kept track of their position so I knew where to swim to, but the effort was exhausting.

And then I made a striking observation: beyond all my expectations and common sense, the ballerinas seemed to be in no trouble at all. Somehow, they were timing their movements in such a way that they swam effortlessly along with the flow of water, not against it; yet they were going precisely in the direction they wanted to go. Their movements were unfathomably graceful, delicate, and effortless, as though they were gliding through, propelled by the water itself. I was in awe. This phenomenal display was akin to a dance of two partners in perfect synchrony: a ballerina and the ocean she was immersed in, ‘flowing around one another’ like a couple dancing the tango.

Yet I was still in trouble, straining every muscle of my body to stay afloat. It then occurred to me that I could try to imitate the movements and timing of the ballerinas' swimming style. And it worked. The more I observed and tried to imitate them, the better I got at it. Soon it became second nature to me and I was gliding along effortlessly just like they were. I was literally ‘in the flow,’ a state in which I tried to exert no conscious control of the situation and, instead, simply allowed myself to be moved by my newly acquired instinct. The water had become my partner, not my enemy. Ultimately, the otherwise scary and threatening situation turned into a very pleasurable and rewarding dance with what is. I was lost in it, in bliss, until I eventually woke up.

The absurdity of the dream is self-evident. Not only did it defy physics and common sense, it defied bivalent logic itself – the core of our rationality. Yet, precisely because of it, the dream evoked a level of subjective feeling and understanding that would have been impossible to convey with an otherwise logical, coherent scenario. For personal reasons I do not want to touch upon here, the lesson it contained was the thing I most needed to grok at that point of my life. And because of the absurdity of the way in which this lesson was delivered – exploring degrees of evocative freedom unavailable in a logical reality – I not only understood it intellectually, but felt it in every bone of my body. Obviously, the lesson was this: Go with the flow; don’t try to control the world. And it was a lesson for life; an example of what we miss because of our civilization’s insistence on dismissing all translogical realities.

Evolution, intelligent design, and other myths

(An improved and updated version of this essay has appeared in my book Brief Peeks Beyond. The version below is kept for legacy purposes.)

Structure of DNA. Image source: Wikipedia.

When I think of, and talk about, the big questions of science and spirituality, I do not adopt the notion of a supernatural being separate from nature as the external ruler of reality; somehow, such a notion doesn't resonate with my intuitions. However, I am indeed sympathetic to the possibility that there may be intelligence and awareness intrinsic to nature. In other words, that as our knowledge of nature advances we may find a natural intelligence and awareness – not just mechanical laws – woven into the very fabric of reality at multiple levels. As I sought to elaborate in Chapter 6 of Rationalist Spirituality, our science today is very far from having shown that it has uncovered all causal influences determining the observable phenomena of nature. The notion that it has simply reflects a pervasive but ultimately unjustified extrapolation grounded purely in subjective values (a paradigm), not in empirical evidence (I discuss this in another article in this blog). Therefore, there is indeed plenty of room for such a causally effective, underlying intelligence in the phenomenology of nature we observe every day.

Cut to the raging debate between evolution and intelligent design. I confess to having largely ignored this debate until very recently, and to being still largely ignorant of the finer points of the argument (which I will not touch upon in this article, opting to remain conservatively agnostic about them). The reason was – I confess – a preconception: I had always thought of intelligent design as creationism. In other words, I equated intelligent design with the notion of a supernatural being standing outside of nature and designing it like an architect designs a building. As I explained above, I have always had, and still have, a strong tendency to reject this story as simplistic, logically inconsistent, and somewhat arbitrary.

Then, a couple of weeks back, I wrote an article on this blog that seems to have been identified by both the materialistic and the religious sides of the debate as bearing relevance to it. That motivated me to have a brief look at what intelligent design actually is. In the corresponding page at Wikipedia, intelligent design is defined as ‘the proposition that certain features of … living things are best explained by an intelligent cause, not an undirected process.’ The article goes on to state that intelligent design ‘deliberately avoids specifying the nature or identity of the intelligent designer.’ Ignoring for now the possibility that there might be – as some claim – political or religious agendas and biases behind either side of the argument, the statements quoted above, taken at face value, seem quite reasonable in light of the discussion in the first paragraph of this article. If there is an underlying intelligence woven into the very fabric of nature and causally contributing to its phenomenology, then this underlying intelligence is consistent with the definition of the ‘designer.’

It is important for me now to state very clearly and explicitly what I am saying and what I am not saying regarding the debate between evolution and intelligent design. Within the context of our current scientific paradigm, and ignoring a potential contradiction built into it, evolution by natural selection is, in my view, an overwhelmingly established model for how we’ve got here as living beings. The evidence for it is, in my view, very clear and solid. Indeed – and remembering that I, as a Jungian, like to call all human models of reality ‘myths’ – evolution by natural selection is one of our best myths.

Now, by stating the above, what I am saying is that the notion that organisms evolve over time through environment-selected mutations is a well-substantiated one. But there is one key aspect behind the modern notion of evolution that I consider non-provable and inelegant: the idea that those mutations are always random. In other words, that the changes in the DNA of organisms that later get selected for are, at origin, purely the result of blind chance.

There is a sense in which saying that something is random is saying nothing at all. In this sense, appealing to randomness is a precarious attempt to conceal our lack of understanding of what is really going on. Indeed, randomness is defined as that which cannot be predicted. As such, it is a human-centered abstraction; a label for our inability to find any coherent pattern in a set of data. And even as an abstraction, randomness is a difficult one: tests for randomness in information theory are notoriously tricky and often unreliable since, theoretically, there is always some chance of finding any given pattern in random data; somewhat of a contradiction.
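To make this trickiness concrete, here is a small illustrative sketch of my own (the run-length threshold and sequence length are arbitrary choices, not drawn from the original text): genuinely random coin flips routinely contain long 'streaks' that intuition would flag as a pattern, which is one reason simple randomness tests mislead.

```python
import random

def longest_run(bits):
    """Length of the longest run of identical symbols in a sequence."""
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
n, trials = 100, 10_000
# Count how often a fair-coin sequence of 100 flips contains a streak of
# 8 or more identical outcomes - something many observers would
# intuitively judge to be 'non-random'.
hits = sum(
    longest_run([random.randint(0, 1) for _ in range(n)]) >= 8
    for _ in range(trials)
)
print(f"{hits / trials:.2%} of random sequences contain a run of length >= 8")
```

A sizeable fraction of purely random sequences trips this naive 'pattern detector,' illustrating why labeling data as random, or not, is a statistically delicate judgment rather than a direct observation.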

Going further, all evidence for evolution by natural selection, as the name indicates, is evidence either for the idea that organisms change over time (i.e. evolution) or for the notion that genetic mutations are selected for based on survival fitness (i.e. natural selection). Both can be entirely correct regardless of whether the genetic mutations are random at origin or the manifestation of an intelligent pattern. Even if genetic mutations in the past were not the result of blind chance, there would still be – as there is – evidence that these non-random mutations were selected for based on survival fitness and, thereby, led to genomic evolution.

Strictly speaking, there isn’t any solid empirical evidence that genetic mutations in the past have had a purely random origin. Evidence for that would require statistical data of a magnitude way beyond anything that could be realistically expected from the fossil record. And even then, the statistical checks for randomness, tricky as they are, would have questionable validity since this data would not have been collected under sufficiently controlled conditions. In my view, it is simply impossible to state that all genetic mutations at the basis of evolution by natural selection have had blind chance as their sole causal agency. Stating it is merely a somewhat arbitrary necessity of the current scientific paradigm – i.e. a set of subjective values and beliefs – but cannot be firmly grounded on empirical evidence.

The hypothesis here is not that a superior intelligence already knows exactly what all organisms should look like, or even that it already knows how to get there. Were this to be so, evolution would be unnecessary: This intelligence could simply manipulate all DNA, in one direct step, into its final desired state. Clearly, empirical evidence contradicts this. So the hypothesis here is that, instead, the underlying intelligence we postulated above is rather experimenting in the laboratory of nature. It may be iteratively seeking an 'optimization' of DNA according to an unknown, but subjective and intentional telos. At each iteration, it may 'observe' the resulting outcome and refine its next attempt through a shift in the balance of genetic mutation probabilities. Evolution by natural selection in the theatre of nature may be its feedback mechanism; its necessary tool in the realization of its intentionality.

To the extent that an appeal to randomness reflects simply our lack of understanding of the causal forces behind genetic mutations, it leaves room for this kind of underlying intelligence. And that, interestingly enough, is not in contradiction with evolution by natural selection; on the contrary: it may be the driving engine behind the variety of all living organisms. Moreover, this hypothesis remains entirely consistent – I dare claim – with all scientific evidence relevant to the debate.


American Progress, by John Gast, circa 1872. Source: Wikipedia.

Many of my articles in this blog have a common theme: they attempt to throw doubt on aspects of our worldview that we normally take for granted. There is thus a way in which these articles can be taken as negative: Instead of offering new explanations, they seem to solely undermine existing explanations. If they are correct, a reader may throw his arms up and say “right, you’ve got a point there; but then, how do we make progress?” Somehow, we expect progress to be made only when new explanations are offered.

But new explanations can only emerge if we are able to put our old explanations in perspective and look upon them from ‘outside the system,’ as Douglas Hofstadter likes to put it. In this context, a big part of making progress is the dismantling of notions and beliefs that prevent us from seeing alternative, and more promising, paths forward. When our paradigms become rusty, they work as barriers to progress; they lock us into repetitive and exhausted patterns of thought. Like horses with blinders, we become unable to contemplate the landscape and grow single-minded. This is where, I believe, throwing paradigmatic assumptions into doubt does contribute to progress. We need to be critical of our stories while being immersed in these stories; a very difficult thing to do. The human creature has an innate tendency to hold on to stories that have proven useful in the past, and then to extrapolate these stories way beyond the point where it is empirically justifiable. Keeping this tendency under control requires active and critical effort, and doesn’t come on its own. The belief that our current epistemology is entirely justified by empirical observations is one of the most pervasive fables of our time, as I discussed in an earlier article.

So keeping our culture’s current stories in perspective by pointing out the ways in which we do not know them to be true is, in my view, a legitimate and constructive thing to do. In my books, I do attempt to offer new explanations and models because I know our minds cannot tolerate the vacuum left when old stories are dismantled; after all, we live in a world of myths, as Carl Jung so correctly pointed out. But I do not offer these explanations as final truths, or even as ‘theories;’ I offer them just as, hopefully, intriguing and well-articulated hypotheses; as food for thought, if you will.

In talking about ‘progress’ we need to ask ourselves what we actually mean by it. The notion of progress, as we normally understand it in Western culture, is historically a positivist one. According to it, progress is about developing technology, infrastructure, and social order; in other words, objective things in the world ‘out there.’ But that seems to be but one of many possible translations of what progress means to us intuitively. Indeed, if you think about it carefully enough, you may find that, at the end of the day, we human beings truly progress if, and only if, we somehow feel better during the course of our lives. Feeling better is the goal; the rest are just means to an end. This way, we progress if the total amount of a certain subjective ‘substance’ we accumulate in our lives – a substance sometimes called ‘happiness,’ other times ‘well-being,’ and even ‘inner peace’ – increases. Any other definition of progress is indirect: Why do we want better technology? To feel better. Why do we want better urban infrastructure? To feel better. Why do we want more stable, fair, and well-functioning societies? To feel better. And anything that makes us feel worse ought not to be called progress.

The problem here is this: We have no objective way to measure the elusive substance of happiness. Therefore, we tend to translate it into something we can measure, like how fast our computers are, how quickly we can get to work, or how high our bank balance is. But the translation, as many of us discover once we've passed 35 years of age or so, is often wrong. Today we live longer lives than ever before in the known history of our civilization; but are we happier than our ancestors? We have access to technologies beyond the imagination of aboriginal cultures around the world; but are we less anxious than they are? We consume hundreds (if not thousands) of times more resources than a poor villager in Bangladesh; but do we have more inner peace?

If the stories we tell ourselves in our culture are at the basis of our anxiety and hopelessness, but they are true, then we have no alternative but to manage the situation as best as we can. If nature is truly meaningless, our existence an accident of probabilities, consciousness merely a side-effect of the synchronized dance of atoms, and the future decided solely by the heartless throw of the quantum dice, then let us face it and put aside some money for the analyst’s bills. But are we sure these stories are really true? No, we’re not. These are the fatalistic extrapolations of people who are so immersed in the paradigm that they are unable to pay broad enough attention to what is happening all around in science and also outside of science: in the real world of human experience – the only carrier of reality anyone can ever know. These inductive inferences have not been demonstrated on the basis of empirical observations; they are justified merely by a subjective set of abstract values and beliefs.

If progress means, ultimately, to find our way to true inner peace and well-being, showing the fatalistic artifacts and extrapolations of our current paradigm for what they are surely has its place in our progress as a culture. One of the most pervasive maladies of our times is the illusion of knowledge; the strong inner belief that we’ve figured it all out and it’s all pretty much meaningless. We don’t know that.

A thought experiment about evolution

(An improved and updated version of this material has appeared in my book Why Materialism Is Baloney. The version below is kept for legacy purposes.)

Do we see the world through distortive glasses? Image source: Wikipedia.

I want to invite you today for a thought experiment. Let us suppose that the key tenets of our scientific, material-reductionist paradigm are all correct. According to this worldview, reality is objective and independent of mind; mind and its conscious perceptions are a by-product of the matter of the brain; and the brain, along with our ability to understand nature, has evolved through natural selection favoring survival of the fittest. Still according to this worldview, life has evolved within a space-time fabric where the interplay of matter and energy gives rise to the set of objective phenomena we call reality. Let us imagine this reality as a collection of objects in the canvas of space-time.

As the first living organisms evolved, they were immersed in the same space-time canvas populated by all the other objects that make up reality: rocks, water, sand, air, other living beings, etc. They also had perceptual mechanisms that gave them indirect access to these other objects: for instance, eyes that allowed them to form internal, subjective images of the objects populating the reality they were immersed in. The game of life consisted in optimizing one’s behavior in the dynamics of all those objects so as to increase one’s chances of surviving and reproducing. Now note that, still according to the current scientific paradigm, because a living being only has access to its own internal images – not to the objects populating reality – its choices for implementing its survival strategy are entirely based on those images alone.

The images are constructed according to the architecture of the living being’s nervous system, which is itself, as postulated, a result of evolution through natural selection. An obvious question is thus: What would the optimal mapping between objects and subjective images be so as to optimize survival? A mapping between two spaces – the objective space of objects and the mental, subjective space of images – can, mathematically speaking, assume infinite forms. One of these possible forms is the identity mapping: to each object in the space 'out there' corresponds a unique, analogous image in the subjective space 'in here.' Such a one-to-one mapping, again, is just one possibility and should not, in principle, be assumed to be the most effective one as far as survival is concerned.

Indeed, many of the objects in the space 'out there' (that is, objective reality) may be irrelevant to survival to the extent that they cannot influence the physical body whose survival is being optimized for. For instance, my own work in the field of artificial neural networks has shown that nervous systems can evolve to advantageously discard the representation of objects whose corresponding images would just increase the amount of 'noise' in the nervous system. Other objects may indeed be relevant to survival in different ways, but mostly according to their relative differences, so that a mapping that altered and distorted their true attributes (like location, behavior, appearance, autonomy, intensity, etc.) so as to highlight these relative differences could conceivably favor survival. Again, in another one of my earlier scientific works, it has been very clearly shown that certain artificial nervous systems perform much better when failing to fully or accurately represent the data available to them. Beyond my own work, a wealth of data on pre-processing systems for artificial neural networks shows that one-to-one mappings between objects and subjective images are often not optimal. Artificial nervous systems using these advantageous pre-processing schemes would, thus, ‘see’ a world very, very different from what is actually 'out there.' Their perception of reality would hardly resemble reality, but instead be set up, through evolution, to 'transform' reality and optimize their own chances of survival. In essence, they would live in a hallucinated theater.

You see, evolution would, most certainly, favor mappings between objects (that is, reality) and subjective images (that is, perceptions) that favored survival, whether such mappings would accurately or completely represent reality or not. After all, the variable being optimized for here is not representation accuracy or completeness, but survival.
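A toy simulation, entirely my own construction and offered only in the spirit of this argument, can make the point tangible. Assume a hypothetical payoff function in which intermediate resource quantities are best (too little starves, too much is toxic). An agent whose perception is a distorting, fitness-tuned mapping then systematically out-forages one that perceives quantities veridically and treats 'more' as better:

```python
import random

def fitness(x):
    """Hypothetical payoff: intermediate resource levels (near 50) are best;
    too little starves the organism, too much is toxic."""
    return x * (100 - x)

random.seed(1)
trials = 10_000
truth_payoff = interface_payoff = 0.0
for _ in range(trials):
    # Two resource patches with true quantities a and b.
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    # 'Veridical' agent: perceives the true quantities and, having no
    # direct access to fitness, takes the larger quantity.
    truth_payoff += fitness(max(a, b))
    # 'Interface' agent: perceives only a fitness-tuned signal (a
    # distorting, many-to-one mapping of the true quantities) and
    # simply takes the stronger signal.
    interface_payoff += max(fitness(a), fitness(b))

print(interface_payoff > truth_payoff)  # prints True
```

The distorted perception never does worse and usually does strictly better, because the quantity being optimized is payoff, not representational accuracy; this mirrors the essay's claim that evolution selects mappings for survival value, not for truthfulness.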

And now here we are: highly evolved organisms with the unique ability to create scientific models of reality. And yet, we naively make an assumption that our own models seem to render highly suspicious: we assume that what we see, or otherwise perceive, can be accurately mapped one-to-one onto the ‘real reality out there.’ We assume that the subjective images in our minds correspond perfectly to the objects of reality. We assume, thus, that we have complete and undistorted access to that reality. This is a contradiction: there is no reason to believe that our brains would have evolved to represent reality completely and as-is; they would, instead, have evolved to represent it in whatever incomplete or distorted way favored survival the most. Therefore, as evolved creatures, we simply have no way to tell what is really going on. And although our technological instruments do broaden our perceptual mechanisms beyond what nature has provided us with, they too are ultimately limited by our ability to build them and to perceive their outputs.

So we end up with a profound contradiction: if we are to be consistent with the scientific paradigm, we cannot trust that what we see is what is actually going on; we may, for all we know, be living in an elaborate, brain-constructed hallucination of reality that happens to maximize our chances of survival. However, the very scientific paradigm that tells us this was itself built upon the very assumption that what we perceive corresponds accurately to nature. If that assumption cannot be made, then can we trust the conclusions of our scientific paradigm to begin with?

The subtleties of perception

An autostereogram. Can you see the shark? Image source: Wikipedia.

The other day I was thinking about those old autostereograms: pictures of apparently random dots that, when looked at in just the right way, make a 3-dimensional image jump out at you. I have never been good at that, but the key seems to be not to focus on the dots. It requires a certain ‘way of seeing’ that transcends analytical effort. Indeed, any effort at analyzing the picture ensures that you will not be able to see the 3D image, even though it’s there right under your nose all the time.

Sometimes I wonder whether autostereograms aren't excellent metaphors for reality. How much of reality are we capable of seeing with our regular, highly analytical way of seeing? How much do we miss? How much could be right under our noses, yet never be seen or even intuited in our daily lives? After all, if the metaphor is valid, the more we try – in the sense of making a goal-driven effort – the more difficult it becomes to see. Is there a trick to seeing more of reality, just as there seem to be tricks to seeing autostereograms? And if there is, what is the meaning and significance of what we would then perceive?

I have asked myself these questions since my early adolescence. Because I have – or so I believe – a particularly hardened analytical mind, answering them to my own satisfaction has always been a difficult – often frustrating – exercise for me. But over the years I have had some successes. I have succeeded in allowing – fleeting as it may have been – a natural change in my way of seeing, through a temporary disruption of the analytical mechanisms that are so much a part of my ordinary perception. What then became clear to me, springing up into my cognitive field as a self-evident and eternal reality, is what I describe in my book Dreamed up Reality.

So the only explanation possible is...

(An improved and updated version of this essay has appeared in my book Brief Peeks Beyond. The version below is kept for legacy purposes.)

The Sleep of Reason Produces Monsters (etching by Goya, c. 1799). Source: Wikipedia.

In logic, a strong distinction is made between deductive and inductive inferences. Here is an example of a deductive inference:

Amsterdam is the capital of the Netherlands. Therefore, if I go to Amsterdam, I must be in the Netherlands.

Clearly, a deductive inference is necessarily implied by its premise, beyond any doubt. Now consider the following inductive inference:

My house has been broken into and there are unidentified footprints in the backyard. Therefore, the footprints were left by the burglar.

Now the inference cannot be derived with certainty from the premises. There is only a reasonable probability, given the circumstances, that the footprints were made by the burglar. Indeed, they could conceivably have nothing whatsoever to do with the burglary; they could have been made, for instance, by the gardener who came in to collect some forgotten tools while I was not home.

Inductive inferences are entirely dependent on our ability to correctly evaluate probabilities. However, probabilities are notoriously tricky to evaluate without the benefit of statistics based on past empirical observations of analogous situations. For instance, consider this hypothetical situation:

For the past 10 years, 90% of the time, the postman came to my house after I was already awake. Therefore, I inductively infer that on Monday the postman will come after I wake up.

Here the probabilities are easy to estimate based on past empirical observations of analogous situations: 10 years of it, to be precise. These previous, empirical observations of the arrival of the postman form a so-called 'reference class' of earlier occurrences. The probability of the inductive inference can then be calculated based on this reference class (in this case, 90% probability that the inference is correct). But what about cases when no proper reference class is available? For instance:
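The reference-class reasoning above can be sketched numerically. A minimal illustration in Python – the function name and the delivery counts are hypothetical, standing in for the 10 years of postman observations in the example:

```python
# Minimal sketch of reference-class probability estimation.
# The counts below are hypothetical stand-ins for 10 years of
# observations of the postman's arrival time.

def reference_class_probability(favorable, total):
    """Estimate the probability of an inductive inference as the
    relative frequency of favorable outcomes in the reference class."""
    if total == 0:
        # No reference class: the probability cannot be estimated
        # from past observations at all -- the essay's key point.
        raise ValueError("no reference class available")
    return favorable / total

# Roughly 3650 deliveries over 10 years, 90% after waking up.
observations_total = 3650
arrived_after_waking = 3285

p = reference_class_probability(arrived_after_waking, observations_total)
print(f"P(postman arrives after I wake up) = {p:.2f}")  # prints 0.90
```

Note that the function refuses to answer when the reference class is empty; that refusal is exactly the situation the examples below illustrate, where a conclusion is drawn anyway.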

Vicky returned from clinical death claiming to have seen the doctors working on her body as if she stood outside of it. Therefore, Vicky’s story is a post-event confabulation based on earlier memories.

But wait; how many times have similar stories, told in analogous situations in the past, been known to be confabulations? Here is another:

George saw a luminous object in the sky performing maneuvers impossible for any known aircraft. Therefore, George saw an alien spaceship.

How many times have similar observations in the past been known to be caused by spaceships from another planet? A final example:

The fundamental laws of nature have been the same across space since the Big Bang. Period.

Now, where are the reference classes in these cases? There aren’t any. Our estimate of probabilities here is not based on objective statistics of previous empirical observations. Instead, and this is a key point, it is subjective; it is based solely on our paradigm – a set of subjective values, assumptions, and beliefs that inform us of what should be possible or likely. According to this paradigm, consciousness is a by-product of brain activity, so Vicky could only have confabulated her story. According to this paradigm, we have already catalogued every observation that could conceivably be produced by the dynamics of our earthly reality, so George could only have seen a spaceship from another planet. And finally, if the laws of nature were changing over time, our entire scientific edifice would be foundationless, so they could only have stayed the same.

In all these cases, the form of the thought is this: 'Since all other alternatives allowed by the paradigm can be discarded, then the only alternative left must be true.' In other words, we extract conclusions by elimination of alternatives. The problem here is that, to infer conclusions by elimination, we must know the boundaries of reality. In other words, we must assume that our paradigm is complete; that there is no yet-unknown aspect or facet of reality lying outside our current paradigm. This is a supremely arrogant, naïve, and dangerous assumption on the face of it; one that history shows to be more-than-likely wrong (for this latter inference we do have a solid reference class!). You see, we don’t know what consciousness is or where it comes from, so discounting that it can exist independently of brain activity is premature at best. We don’t know all the parameters and dynamics of our earthly reality, so postulating a non-earthly agency to explain certain bizarre observations is hasty. And finally, we just cannot know whether the laws of physics have been the same since the Big Bang; yes, we have models based on this assumption that seem to explain reality, but that inverts the argument: these models were built precisely so as to make sense of the assumption to begin with.

Now here is the problem: a very significant portion of our worldviews, even the most hard-nosed scientific ones, is based on just this type of inductive inference, unsupported by a proper reference class. In fact, science itself is based on this kind of inductive inference: after all, it is the only way to claim that the same laws and dynamics empirically observed under laboratory conditions apply to reality at large, over time and across space.

Inductive inferences motivated only by paradigms, instead of empirically-derived reference classes, lead to worldviews that are at least as much a reflection of our own thoughts (and limitations of thought) as they are a reflection of a supposedly objective nature. We live in a reality largely defined by a paradigm – a set of beliefs – as opposed to objective, empirical facts. This may reflect a level of unconscious closed-mindedness and sheer naïveté that one day may profoundly surprise us.

Schizophrenic idealism

The Knight's Dream, 1655, by Antonio de Pereda. Source: Wikipedia.

The philosophy of idealism, defended through the ages by great minds like those of George Berkeley, Immanuel Kant, Georg Hegel, Gottfried Leibniz, and John McTaggart, entails that all reality is ultimately just a conscious experience. In other words, unlike realism – which postulates an external, objective world 'out there' triggering our perceptions – idealism postulates the existence of nothing but our conscious perceptions themselves. As such, idealism is a much more parsimonious and cautious worldview. Yet, somehow, realism has come to completely dominate the worldview of our culture. Most of us hardly question the assumption that there is a reality 'out there' independent of our minds; that is, that nature would still go merrily on even if nobody were looking. Leaving aside the scientific evidence to the contrary, one wonders why realism has come to be synonymous with our culture’s collective intuition of reality.

The problem is that most people, when considering the hypothesis of idealism, hardly think it through consistently. And in pondering just a half-baked, 'schizophrenic' version of idealism, contradictions arise that seem to render it untenable. This is not a sign of lazy thinking or stupidity on the part of any one of us; it’s a side-effect of the cultural fog we live immersed in. You see, in meditating about idealism most of us still unconsciously retain some key assumptions of realism. It is these hidden, unconscious assumptions that give rise to the contradictions, not idealism itself. For instance, we tend to retain the assumption that minds are inside brains. And then, given that brains are clearly separate from one another, a contradiction arises. After all, if reality is only in the 'mind' (meaning, only in the brain), how come we all share the same reality? That doesn’t seem possible; reality must be external to minds so we can all look at the same reality from the perspective of different brains. There seems to be no other possible explanation for the fact that we all seem to share the experience of a common reality. Therefore, idealism must be a fallacy.

The argument above is malformed and wrong. It judges idealism while assuming key features of realism. Namely, it assumes that minds are inside objective structures of an external reality: brains. But according to idealism there are no such things as objective structures in a reality external to mind; instead, it’s all in the mind. So the mind is not in the brain; it’s the brain that is in the mind. The dream is not in the body; it’s the body that is in the dream. As such, bodies and brains can be seen as space-time anchors for a certain point-of-view taken by mind within a kind of palpable, continuous dream. The fact that brains are separate from each other in the canvas of such a dream says absolutely nothing about mind’s ability to coordinate, in a very consistent manner, a dream shared by its many points-of-view. When an idealist says that 'it's all in here,' pointing at his head, he is at best expressing himself metaphorically and, at worst, being unconsciously inconsistent with his own position. To a true idealist, reality is not in the head; it's the head that is in the mind.

Ultimately, the idealism-versus-realism dichotomy may be no true dichotomy at all. To say that everything is a construct within a mind is not to deny any of the qualities of experience: the concreteness, solidity, or continuity of things. This form of monistic idealism does not deny physics insofar as the latter entails models for predicting how things behave empirically; it only denies some of our ontological assumptions about how our experience of such behaviors comes into being. In other words, monistic idealism questions only our myths and stories, not our empirical observations. Such a non-dualistic view entails merely that the spectrum of qualities normally associated with constructs of the imagination – their potential concreteness, solidity, and continuity – extends far beyond our ordinary intuition; further than we ever dared think.

I wanted to write this article today to mark the release of my second book, Dreamed up Reality, in the next few days. I wanted to give you a taste of the key idea I dwell upon in it: the idea that, ultimately, all data about reality – about what may or may not be going on – resides in the mind. From a strict epistemic perspective, the 'external' world is a story we tell ourselves; a non-provable myth, reasonable and self-consistent as it may appear. As such, if one wants to set out on a path of exploration unhindered by the cultural fog we live in, one must go back to basics and start from within the mind: What does one really know from experience, and what is, instead, myth and story-telling? That was my original attempt, and through my new book I have now decided to share that story.