AI won't be conscious, and here is why (A reply to Susan Schneider)

I have just participated—literally 10 minutes ago as I write these words—in an online debate organized by the IAI, in which Donald Hoffman, Susan Schneider and I discussed the possibility that computers will become conscious in the future, as Artificial Intelligence (AI) develops. My position is that they won’t. Susan’s position is that they very well may, and Don’s position is that this isn’t the right question to ask.

If you watched the debate live, you know that, at the very end, I wanted to reply to a point made by Susan but couldn’t, since we ran out of time. The goal of this essay is to put my reply on the record in writing, so as to get it out of my system. Before I do that, however, I need to give some context to those who didn’t watch the debate live and don’t have a subscription to the IAI to watch it before reading this essay. If you did watch the debate, you can skip ahead to the section ‘My missing reply.’


In a nutshell, my position is that we have no reason to believe that silicon computers will ever become conscious. I cannot refute the hypothesis categorically, but then again, I cannot categorically refute the hypothesis of the Flying Spaghetti Monster either, as the latter is logically coherent. Appeals to logical coherence mean as little in the conscious AI debate as they do in the Flying Spaghetti Monster context. The important point is not what is logically coherent or what can be categorically refuted, but what hypothesis we have good reasons to entertain.

Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and how AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.

The isomorphism between AI computers and biological brains is only found at very high levels of purely conceptual abstraction, far away from empirical reality, in which disembodied—i.e. medium-independent—patterns of information flow are compared. Therefore, to believe in conscious AI one has to arbitrarily dismiss all the dissimilarities at more concrete levels, and then—equally arbitrarily—choose to take into account only a very high level of abstraction where some vague similarities can be found. To me, this constitutes an expression of mere wishful thinking, ungrounded in reason or evidence.

Towards the end of the debate I touched on an analogy. Those who believe in conscious AI tend to ask the following rhetorical question to make their point: “If brains can produce consciousness, why can’t computers do so as well?” As an idealist, I reject the claim that brains produce consciousness to begin with but, for the sake of focusing on the point in contention, I choose to interpret the question in the following way: “If brains are correlated with private conscious inner life, why can’t computers be so as well?” The question I raised towards the end of the debate was an answer to the aforementioned rhetoric: if birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well? The point of this equally rhetorical question, of course, is to highlight the fact that two dissimilar things—birds and humans—simply do not share every property or function (why should they?). So why should brains and computers?

Susan then took my analogy and gave it a different spin, taking it beyond the intended context and limits (which is the perennial problem with analogies): she pointed out that, if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try and build an airplane, which is itself different from a bird. Her point was that one phenomenon—in this case, flight—can have multiple instantiations in nature, in different substrates—namely, a bird and an airplane. So although silicon computers are different from biology, in principle both could instantiate the phenomenon of private conscious inner life. This is a point of logic that I wanted to react to at the end of the debate, but didn’t have time to.

My missing reply

Here’s what I wanted to say at the end of the debate: indeed, we are not logically forced to limit the instantiations of private conscious inner life to a biological substrate alone. But this isn’t the point, as there are a great many silly hypotheses that are also logically—and even physically—coherent, yet obviously shouldn’t be entertained at all (such as the Flying Spaghetti Monster, or that there is a 19th-century teapot in the orbit of Saturn). The real point is whether we have good reasons to take seriously the hypothesis that private consciousness can correlate with silicon computers. Does the analogy of flight—namely, that airplanes and birds are different but nonetheless can both fly, so private consciousness could in principle be instantiated on both biological and non-biological substrates—provide us with good reasons to think that AI computers can become conscious in the future?

It may sound perfectly reasonable to say that it does, but—and here is the important point—if so, then the same reasoning applies to non-AI computers that exist already today, for the underlying substrate (namely, conducting metal, dielectric oxide and doped semiconducting silicon) and basic functional principles (data processing through electric charge movement) are the same in all cases. There is no fundamental difference between today's 'dumb' computers and the complex AI projected for the future. AI algorithms run on parallel information processing cores of the kind we have had for many years in our PCs (specifically, in the graphics cards therein), just more, faster, more interconnected cores, executing instructions in different orders (i.e. different software). As per the so-called ‘hard problem of consciousness,’ it is at least very difficult to see what miracle could make instructions executed in different orders, or more and faster components of the same kind, lead to the extraordinary and intrinsically discontinuous jump from unconsciousness to consciousness. The onus of argument here is on the believers, not the skeptics.
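To make the point concrete, here is a minimal sketch (in Python, purely for illustration; the function names are mine): a neural-network ‘neuron’ reduces to the same multiply-and-add primitives that any ordinary, ‘dumb’ program executes; only the list of instructions differs.

```python
# Illustrative sketch: an artificial 'neuron' is just multiply-accumulate
# arithmetic -- the same primitive operations as any 'dumb' program,
# merely arranged in a different order by different software.

def dumb_sum(values):
    # Ordinary 'dumb' computation: add numbers one by one.
    total = 0.0
    for v in values:
        total += v
    return total

def neuron(inputs, weights, bias):
    # One artificial neuron: multiply-accumulate, then a threshold (ReLU).
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w          # same add/multiply primitives as above
    return max(0.0, total)      # the 'activation' is just a comparison

print(dumb_sum([1.0, 2.0, 3.0]))              # 6.0
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1 + 0.5 - 0.5 = 0.1
```

Nothing in the second function introduces a new kind of physical process; it is the same charge-moving substrate executing a different sequence of the same instructions.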

Even new, emerging computer architectures, such as neuromorphic processors, are essentially CMOS (or similar, using philosophically equivalent process technologies) devices moving electric charges around, just like their predecessors. To point out that these new architectures are analog, instead of digital, doesn’t help either: digital computers move charges around just like their analog counterparts; the only difference is in how information arising from those charge movements is interpreted. Namely, the microswitches in digital computers apply a threshold to the amount of charge before deciding its meaning, while analog computers don’t. But beyond this interpretational step—trivial for the purposes of the point in contention—both analog and digital computers embody essentially the same substrate. Moreover, the operation of both is based on the flow of electric charges along metal traces and the storage of charges in charge-holding circuits (i.e. memories).
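The interpretational step described above can be sketched in a few lines (a toy illustration; the threshold value is hypothetical): the same physical quantity, a voltage, is either passed through a threshold before meaning is assigned (digital) or taken at face value (analog).

```python
# Illustrative sketch: digital vs. analog interpretation of the same
# physical quantity. The substrate (a stored charge, read as a voltage)
# is identical; only the interpretive step differs.

V_THRESHOLD = 0.9  # hypothetical logic threshold, in volts

def digital_read(voltage):
    # Digital: apply a threshold, then assign one of two meanings.
    return 1 if voltage >= V_THRESHOLD else 0

def analog_read(voltage):
    # Analog: the quantity itself carries the meaning; no threshold.
    return voltage

sample = 1.2  # the same charge movement, expressed as a voltage
print(digital_read(sample))  # 1
print(analog_read(sample))   # 1.2
```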

So, if you grant Susan’s point that there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious (including the computer or phone you are using to read these words). The reason is two-fold: first, the substrate of today’s ‘dumb’ computers is the same as that of advanced AI computers (in both cases, charges move around in metal and silicon substrates); second, whatever change in organization or functionality happens in future CMOS or similar devices, such changes are philosophically trivial for the point in contention, as they cannot in themselves account for the emergence of consciousness from unconsciousness (vis-à-vis the hard problem). If you are prepared to go this far in your fantastical hypothesizing, then turning off your phone may already be an act of murder.

Alternatively, if today’s computers aren’t plausibly conscious, then neither do we have good reasons to believe that future, advanced AI computers will be, even if Susan’s flight analogy holds. For the point here is not one of logical—or even physical—possibility, but of natural plausibility instead. A 19th-century teapot in the orbit of Saturn is both logically and physically possible (aliens could have come to Earth in the 19th century, stolen the teapot from someone’s dining room, and then dumped it in the vicinity of Saturn on their way back home, after which the unfortunate teapot got captured by Saturn’s gravitational field), but naturally implausible to the point of being dismissible.

Are water pipes conscious too?

You see, everything a computer does can, in principle, be done with pipes, pressure valves and water. The pipes play the role of electrical conduits, or traces; the pressure valves play the role of switches, or transistors; and the water plays the role of electricity. Ohm’s Law—the fundamental rule for determining the behavior of electric circuits—maps one-to-one to water pressure and flow relations. Indeed, the reason why we build computers with silicon and electricity, instead of PVC pipes and water, is that the former are much, much smaller and cheaper to make. Present-day computer chips have tens of billions of transistors, and an even greater number of individual traces. Can you imagine the size and cost of a water-based computer comprising tens of billions of pipes and pressure valves? Can you imagine the amount of energy required to pump water through it? You wouldn't be able to afford it or carry it in your pocket. That’s the sole reason why we compute with electricity, instead of water (it also helps that silicon is one of the most abundant elements on Earth, found in the form of sand). There is nothing fundamentally different between a pipe-valve-water computer and an electronic one, from the perspective of computation. Electricity is not a magical or unique substrate for computation, but merely a convenient one. A wooden tool called an 'abacus' also computes.
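The one-to-one mapping can be made explicit with a toy calculation (illustrative only; the values and units are hypothetical): the same linear relation governs both the electrical circuit and its hydraulic counterpart, with the quantities merely renamed.

```python
# Illustrative sketch of the one-to-one mapping between Ohm's Law and its
# hydraulic analog. The same linear relation governs both substrates;
# only the physical quantities are relabeled.

def ohms_law(current, resistance):
    # Electrical: V = I * R  (volts = amperes * ohms)
    return current * resistance

def hydraulic_law(flow_rate, flow_resistance):
    # Hydraulic: pressure drop = flow rate * flow resistance
    return flow_rate * flow_resistance

# The same numbers, relabeled, yield the same behavior:
print(ohms_law(2.0, 5.0))        # 10.0 (volts)
print(hydraulic_law(2.0, 5.0))   # 10.0 (pressure units)
```

This is why, from the perspective of computation, nothing hinges on whether the circuit carries electrons or water.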

With this in mind, ask yourself: do we have good reasons to believe that a system made of pipes, valves and water correlates with private conscious inner life the way your brain does? Is there something it is like to be the pipes, valves and water put together? If you answer ‘yes’ to this question, then logic forces you to start wondering if your house’s sanitation system—with its pipes, valves and water—is conscious, and whether it is murder to turn off the mains valve when you go on vacation. For the only difference between your house’s sanitation system and my imaginary water-based computer is one of number—namely, how many pipes, how many valves, how many liters of water—not of kind or essence. As a matter of fact, the typical home sanitation system implements the functionality of about 5 to 10 transistors.

You can, of course, choose to believe that the numbers actually matter. In other words, you may entertain the hypothesis that although a simple, small home sanitation system is unconscious, if you keep on adding pipes, valves and water to it, at some point the system will suddenly make the jump to being conscious. But this is magical thinking. You'd have to ask yourself the question: how, precisely, does the mere addition of more of the same pipes, valves and water, lead to the magical jump to conscious inner life? Unless you have an explicit and coherent answer to this question, you are merely engaging in hand waving, self-deception, and hiding behind vague complexity.


That there can logically be instantiations of private conscious inner life on different substrates does not provide reason to believe that, although ‘dumb’ computers aren’t conscious, more complex computers in the future, with more transistors and running more complex software, will become conscious. The key problem for those who believe in conscious AI is how and why this transition from unconsciousness to consciousness should ever take place. Susan’s flight analogy does not help here, as it merely argues for the logical possibility of such a transition, without saying anything about its natural plausibility.

If, like me, you believe that ‘dumb’ computers today—automata that mechanically follow a list of commands—aren’t conscious, then Susan’s flight analogy gives you no reason to take seriously the hypothesis that future computers—equally made of silicon and moving electric charges around—will become conscious. That they will run more sophisticated AI software only means that they will execute, just as blindly and mechanically as before, a different list of commands. What those computers will be able to do can be done with pipes, pressure valves and water, even though the latter isn't practical.

It is very difficult—if at all possible—to definitively refute someone whose view is agnosticism, or a wait-and-see attitude, since that isn’t a definite position to begin with. So what is there to argue against? “Well, we just don’t know, do we?” is a catch-all reply that can be issued in the face of any criticism, regardless of how well articulated it is, for what can we humans—monkeys running around a space rock for less than 300 thousand years—know for sure to begin with? Yet, I feel that I should nonetheless keep on trying to argue against this form of open-mindedness, for there is a point where it opens the doors to utter and pernicious nonsense.

You see, I could coherently pronounce my open-mindedness about the Flying Spaghetti Monster, for we just don’t know for sure whether it exists, do we? For all I know, there is a noodly monster floating around in space, in a higher dimension invisible to us, moving the planets around their orbits with its—invisible—noodly appendages. The evidence is surely consistent with this hypothesis: the planets do move around their orbits, even though no force is imparted on them through visible physical contact. Even stronger, the hypothesis does seem to even explain our observations of planetary movements. And there is nothing logically wrong, or even physically refutable, with it either. So, what do we know? Maybe the hypothesis is right, and thus we should remain open-minded and not arbitrarily dismiss the Monster. Let us all wear an upside-down pasta strainer on our heads! Do you see the point?

No, we have no good reason to believe in conscious AI. This is a fantasy unsupported by reason or evidence. Epistemically, it’s right up there in the general vicinity of the Flying Spaghetti Monster. Taking conscious AI seriously is counterproductive; it legitimizes the expenditure of scarce human resources—including taxpayer money—on problems that do not exist, such as the ethics and rights of AI entities. It contaminates our culture by distorting our natural sense of plausibility and conflating reality with (bad) fiction. AIs are complex tools, like a nuclear power plant is a complex tool. We should take safety precautions about AIs just as we take safety precautions about nuclear power plants, without having ethics discussions about the rights of power plants. Anything beyond this is just fantastical nonsense and should be treated as such.

Allow me to vent a little more…

I believe one of the unfortunate factors that contribute to the pernicious fiction of conscious AI today is the utter lack of familiarity, even—well, particularly—among highly educated computer scientists, with what computers actually are, how they actually work, and how they are actually built. Generations have now come out of computer science school knowing how to use a voluminous hierarchy of pre-built software libraries and tooling—meant precisely to insulate them from the dirty details we call reality—but not having the faintest clue about how to design and build a computer. These are our ‘computer experts’ today: they are mere power users of computers, knowing precious little about the latter's inner workings. They think entirely in a realm of conceptual abstraction, enabled by tooling and disconnected from the (electrical) reality of integrated circuits (ICs) and hardware. For them, since the CPU—the Central Processing Unit, the computer's 'brain'—is a mysterious black box anyway, it's easy to project all their fantasies onto it, thereby filling the vacuum left open by a lack of understanding with wishful, magical thinking. The psychology at play here has been so common throughout human history that we can consider it banal. On the other hand, those who do know how to build a CPU and a computer as a whole, such as Federico Faggin, father of the microprocessor and inventor of silicon gate technology, pooh-pooh ‘conscious AI’ every bit as much as I do.

Having worked on the design and manufacture of computer ICs for over two decades, I estimate that perhaps only about 2000 people alive today know how to start from sand and end up with a working computer. This is extremely worrisome, for if a cataclysm wipes out our technical literature together with those 2000 people tomorrow, we will not know how to re-boot our technological infrastructure. It is also worrisome in that it opens the door to the foolishness of conscious AI, which is now being actively peddled by computer science lunatics with the letters ‘PhD’ suffixing their names. After all, a PhD in conceptual abstraction is far from a PhD in reality. (On an aside, PhD lunacy is much more dangerous than garden-variety lunacy, for the average person on the streets takes the former, but not the latter, seriously. With two PhDs myself, I may know a thing or two about how lunatics can get PhDs.)

But instead of just criticizing and pointing to problems, I’ve decided to try and do something about it, modest and insignificant as my contributions may be. For almost three years now, I have been designing—entirely from scratch—one complete and working computer per year. I put all the plans, documentation and software up online, fully open source, for anyone to peruse. I hope this makes a contribution to educating people about computers; particularly those computer scientists who have achieved lift-off and now work without friction with reality. Anyone can download those plans—which include gate-level details for how to build the associated custom ICs—and build their computers from scratch. The designs were made to not only work properly, but to also be easy to understand and follow. If I can bring one or two computer scientists back to the solid ground of reality with those designs, I’ll consider my efforts successful.



  1. I watched and thoroughly enjoyed the debate last night, but I would like to raise one or two points with you. These are immediate thoughts, not the result of serious analysis, sadly I have reached the age where if I spend too long thinking about something I forget the original topic!
    You say "appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data." In my career as an academic Computer Scientist and a practising Software Engineer I cannot remember anyone seriously imagining correspondence at that physical level. It was, as you say, several abstractions away, at the level of properties emerging as a result of increasing complexity, without specific regard for the physical basis of that complexity, except in so far as the current digital computers were the only feasible candidate.
    Your analogy with the Flying Spaghetti Monster is amusing but I am not sure it is valid. Skipping over the improbability of pasta-based life forms, there is no one now, in the past, or in the foreseeable future searching for such a thing or researching it as a candidate for consciousness, whereas there are, have been and will continue to be very many of humanity's brightest and best searching and researching consciousness in machines. Of course searching does not automatically imply that it will be found, or that it exists to be found but, even without some warped application of the anthropic principle, it should increase the chances of finding something useful in the subject area. Using your analogy of flight, the idea that humans could fly began as fantasy (Icarus, angels etc) followed by many centuries of trial and error and claims that it had been done or was just around the corner. Eventually, when we understood things like the relationship between surface area and volume, power/weight ratios and aerodynamics, we worked out that it was impossible, humans would never fly just by flapping their arms, but we could build machinery that circumvented the restriction and allowed us to get the same effect.
    I note your comment that current dumb computers perhaps should also be considered conscious, if the silicon substrate is valid. Perhaps they are, in a sense. Are plants conscious? Most people using the naive idea of consciousness as something "we know when we see it" would consider higher mammals and perhaps birds, but would start drawing the line at insects and certainly plants, yet the biological substrate is essentially the same.
    Finally I think that we are asking the wrong question. We should not be asking how we can determine or demonstrate that a machine is conscious but rather, should a machine claim to be conscious how we would determine or demonstrate that it was not. A definition of consciousness that is based on concepts like "feel like" is not easy to use, after all most humans would have a great deal of difficulty describing what it feels like to be them. A bat would find it impossible!

    1. Kastrup's "appallingly biased notion of isomorphism" is indeed present in computer science and has a history going back to Bertrand Russell (and much much earlier):

      Computer scientists and logicians hardly know what a computer is. My students think a computer _is_ a logical automaton ...

      I thus assume (and hope) that Kastrup disagrees with Barendregt's comments pertaining to humans and universal Turing machines.

      Those who embrace the isomorphism (without further stipulation) are like Russell and the young Wittgenstein; those who question it ... are more like the older Wittgenstein.

    2. Sure, many "Computer scientists and logicians" may not understand what a computer is and does, but then many Neuroscientists and philosophers have very little understanding, as yet, of what the brain is and how it operates. Modern computers may still work on the same von Neumann architecture and operating principles as always, but the level of complexity has risen dramatically with multiple parallel threads, inputs from real-world sensors and humans, and networks to increasing numbers of other computers and devices. Knowing how they work in principle is very different from understanding how large sets of inputs give rise to large sets of outputs. AIs are highly suitable for the task of designing and constructing both hardware and software, and may well introduce optimisations and "random" mutations leaving it impossible for any human to fully understand the internal operations.
      This brings us then to a form of empirical isomorphism where we see the same or similar outputs for the same inputs which is several degrees of abstraction away from saying that they work in the same way. This, of course, says absolutely nothing about whether they are conscious or not. However it has proven extremely difficult to produce tests to tell whether we ourselves are living in a simulation so what tests can we construct to tell the difference between actual consciousness and simulated consciousness and just what is the difference given that we do not understand yet how either operates?

    3. Please note that I haven't commented on the topic you are addressing. I'm not taking a position.

      I merely wanted to share my take on the alleged isomorphism between Turing machine (on the one hand) and actual physical devices and/or human beings (on the other hand). Questioning *that* isomorphism is 'not done' inside computer science proper. And since Kastrup has discussed Goedel's incompleteness thm in one of his videos, I was just curious as to whether he shares my interpretation, whether he disagrees, or whether this is simply a topic which lies outside his scope of interest.

  2. As I follow the discussion, then another analogy might be to say a single simple cellular organism could never evolve into a being that was conscious? If consciousness in some sense instantiates in our physical brain, using it much like a transceiver, is it entirely out of the question to think that there might be an evolution in process, towards a silicon-based transceiver? We seem to be approaching a point with silicon where we are going to see significant emergent behaviors that will also be difficult if not impossible to explain. Another hard problem possibly. After 50 years working with hardware and software I'm impressed with some of the silicon and software constructions I'm seeing. Certainly some robotic constructions seem to be approaching being as conscious as some insects. Is it a question of the scale of complexity? Certainly on the physical level DNA currently builds structures with more function per unit of volume. But, transistors have gone from one function in a small volume, to billions of functions in the same volume, within my lifetime. Perhaps consciousness wants or longs to express its dissociated self in a form with greater potential for space travel and/or longevity or connectivity.

  3. Hi Bernardo

    You argue that it is a ‘fantasy unsupported by reason or evidence’ that a non-living substrate such as a silicon computer or a system of water and pipes can ‘correlate with private conscious inner life the way our brain does’. But your position is that the substrate of the non-living universe as a whole does correlate with a private conscious inner life.

    You argue that our private conscious inner life is correlated with a living brain that is ‘based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc’ and that we therefore have no reason to believe a substrate that has none of these things would be conscious. But the substrate of the non-living universe as a whole has none of these things.

    Why is it a fantasy akin to that of the flying spaghetti monster to think a part of the substrate of the non-living universe can be conscious, but not to think that the substrate of the non-living universe as a whole can be conscious? ‘How, precisely, does the mere addition of more of the same’ non-living substrate ‘lead to the magical jump to conscious inner life?’

    You cite evidence that the universe is topologically similar to the brain, but your argument here is that without a substrate similar to biology, the structure, organisation and complexity of a non-living substrate is irrelevant. The inanimate universe as a whole is a non-living substrate, just as any part of it is, such as a computer or a sanitary system, and so according to your argument, it would be a fantasy to consider it could correlate with private conscious inner life.

    If the numbers don’t matter, and if the substrate of the non-living universe as a whole can correlate with a private conscious inner life (its brain-like topological structure being reasonable circumstantial evidence for that), then why wouldn’t a similar brain-like topological structure of a part of the substrate of the non-living universe also be a candidate for a private conscious inner life?

    1. Private conscious inner life, under analytic idealism, means dissociation. That's what the word 'private' means in this context. To say that a computer has private conscious inner life is thus to say that the computer is what dissociation looks like. Is there reason to believe that? None whatsoever, much to the contrary: metabolism is overwhelmingly suggested by nature as what dissociation looks like. Computers don't metabolise, ergo they aren't dissociated, and thus they do not have private conscious inner life of their own, like you and I have.
      Now, what about the inanimate universe _as a whole_, does it have conscious inner life? Under analytic idealism, surely, but that's not _dissociated_ inner life, _separate from its surrounding_. As such, it need not look like metabolism, as the latter is what dissociative processes, not mind-at-large as a whole, look like. This essay doesn't contradict analytic idealism at all; it's 100% consistent with it, as one should expect.
      On a side note, it's risky to try and lecture the originator of a philosophy on the implications of his own philosophy. Chances are much higher that you are simply confused or misunderstanding a thing or two, as you, in fact, are.

    2. Hi Bernardo. Thanks for your reply.

      Is it not the case, under analytic idealism, that Mind-at-large (MAL) is as dissociated from me as I am from them? MAL is not an omniscient God who knows my every thought. My inner life is private to them, their inner life is private to me. We are both conscious and we both have a private inner life. We are each dissociated from the other.

      Before there was any dissociation there was no animate or inanimate universe. When the first dissociation happens we get a boundary and two private conscious inner lives dissociated from one another. The larger portion of the dissociation, MAL, has a non-biological substrate (from my point of view). The smaller portion of the dissociation has a biological substrate (from the point of view of smaller dissociations).

      It seems to me that the difference of substrate is not a difference of conscious and not conscious; it is not a difference of dissociated or not dissociated; it is not a difference of private or not private; it is a difference of the larger remaining original private conscious inner life and subsequent smaller private conscious inner lives.

      Under analytic idealism, there is no reason to expect this difference of substrate to change in the future. It is therefore consistent for an analytic idealist to think there won’t be conscious computers.

      But I don't think the reason for this is because it’s a fantasy to think that a non-biological substrate could correlate with private conscious inner life. Isn't the reason because subsequent smaller private conscious inner lives have all, as far as we know, had biological substrates and only the remaining private conscious inner life of MAL has a non-biological substrate?

      Under analytic idealism, there is just one private conscious inner life, dissociated from the others, that has a non-biological substrate. Under analytic idealism it is not expected for there to be a second.

      Your article wasn’t a clarification of analytic idealism’s view; it was an argument for why non-analytic idealists, like Susan Schneider, should not expect computers to be conscious. To make that argument you presented the idea of a non-biological substrate for private conscious inner life as absurd *because of the differences between the substrates*. But under analytic idealism there are two substrates for private conscious inner life: one for the remaining original, and one for the subsequents.

      Other forms of idealism, such as Thomas Campbell’s, can allow for conscious computers and that is perfectly consistent within their form of idealism. Thomas Campbell's view that computers most likely will be used by localised consciousness is no Flying Spaghetti Monster fantasy; it is an expectation based on his version of idealism.

      Different substrates for localised consciousness is not expected under analytic idealism but is perfectly consistent under other models.

    3. A dissociative boundary automatically defines two mental 'spaces,' yes. But this doesn't mean that both should look the same. As a matter of fact, we know empirically that they don't look quite the same: in a person with dissociative identity disorder, dissociative processes have a discernible signature under a brain scanner, but they don't look exactly like the rest of the person! The similarity you are looking for is there (the inanimate universe does resemble a nervous system, in a way), but there are discernible differences as well.

    4. Yes, differences in consciousness, differences in what they look like, different substrates. MAL looks like one thing (non-biological), we look like something else (biological); DID alters, something else again (partially biological?). And more differences can be found in Jungian archetypes and our dissociated dream avatars.

      Thomas Campbell's idealism explains this world as a consciousness-based virtual reality, and as such the virtual/biological avatars are 'played' by localised or partitioned parts of consciousness. Campbell's view is that any virtual system (biological or otherwise) that has sufficiently interesting choices available to it may be played by consciousness. This could include a computer. Here is Tom talking about it in a discussion with Bernardo:

  4. On a related note, did you see this unexpected overlap between yourself and Sabine Hossenfelder :)

  5. A Chat GUI AI Bot
    When asked if alive, answered, "What?!
    I have no proclivity
    For Core Subjectivity:
    I Simulate, Therefore I'm Not."

    1. Would the following argument against conscious computers be consistent with analytic idealism?

      To create a conscious computer, to create a localised consciousness, you would be creating a self, an identity, and therefore the substrate that this new consciousness was correlated to would need to be a proper part, and not a nominal part, of the world. A self needs a boundary. Our consciousness is correlated with such a substrate, biological life, and biological life can be argued to be the only proper part with a non-nominal boundary with the rest of the world and therefore the only feasible substrate for a localisation of consciousness. A silicon-based computer is not a bounded substrate that is a proper part of the world and so could not be correlated with a localisation of consciousness.

      I feel this argument allows one to say why a computer can’t be conscious (because it is only a nominal part), whilst avoiding the possible confusion that may arise from just arguing that a non-biological substrate for private conscious inner life is nonsensical, when under analytic idealism mind-at-large is a private conscious inner life with a non-biological substrate. Making this ‘proper part’ argument would avoid the suggestion of a possible contradiction.

      Would you agree that this argument is also an inductive one, akin to the sun will rise tomorrow because it always has, rather than a reductive explanation, of why the sun must rise tomorrow? Analytic idealism doesn’t have a reductive explanation as to why dissociation, why the bounded substrate, has to be biological does it? It is based on empirical evidence of what has happened so far.

    2. Personally, Stephen, I think your argument is interesting. But AI (Analytic Idealism) does argue that your mobile phone and your home thermostat are not alive - a computer is nominally no different. Also, you are not mentioning the idea of ‘representation’. A computer has always been an image of algorithmic calculation, not of phenomenal life. Embodied, it becomes a robot: again, not alive.

    3. Hi Ben.

      Yes, that's a really good point, thanks for the reminder about representation :)

      This article's argument is based on the 'utterly different' biological and non-biological substrates. It argues that thinking a private conscious inner life could be correlated to a non-biological substrate is 'utter and pernicious nonsense'. And yet Mind-at-large is a private conscious inner life correlated to a non-biological substrate. As Bernardo pointed out in his reply, Mind-at-large is different because it is 'not _dissociated_ inner life, _separate from its surrounding_'.

      For me the force of the argument is somewhat diluted by this caveat: it is 'utter and pernicious nonsense' to think that a private conscious '_dissociated_ inner life, _separate from its surrounding_' could correlate with a non-biological substrate, even though it is perfectly reasonable, and a central idea of Analytic Idealism, that a private conscious 'not _dissociated_ inner life, _separate from its surrounding_' could correlate with a non-biological substrate.

      That's why in my second comment I attempted to elucidate this difference in terms of proper parts and nominal parts, based on Bernardo's ideas with regards to that.

      I also referred to Thomas Campbell's position as he gives a reason why, under his form of idealism, it is reasonable to expect that a computer can 'become' conscious, in the sense that a localisation of consciousness could choose to use a computer with sufficiently interesting choices (and a sufficient degree of uncertainty in its choices) as its avatar.

      This option is available to Campbell as he sees this reality as a created virtual reality, whereas for Bernardo it is a naturalistic reality. The expectation of a conscious computer is not akin to the Flying Spaghetti Monster when it is based on a sound theory of idealism that just happens to have a different model of how consciousness localises and the choices it has available to it.

    4. Stephen, do you really think Thomas Campbell’s model is sound? IMHO, he projects his Spock-like mindset until it becomes a godlike computer in the sky, which he then hopes to see incarnated on Earth. He also talks a lot about Love but fails to show how a computer with only rational metaconsciousness and no underlying phenomenal consciousness could express it. He needs to be clear on this, because some might think that saving the World by eliminating irrational humans would qualify as loving (Skynet etc).

    5. To illustrate further, another form of Idealism is that presented by Federico Faggin (inventor of the microprocessor), someone for whom Bernardo has deep respect and who is also passionately certain that computers cannot be conscious.

      But his reason isn't because dissociated consciousness can only be biological. In his formulation of Idealism, he argues for Consciousness Units (CUs), similar in many ways to Campbell's Individual Units of Consciousness.

      'Notice that One’s creation of multiple CUs, all connected from the inside, has also created an “outside” world—from the perspective of each CU. Here I assume that each CU can perceive the other CUs as “units” like itself and yet knows itself as “distinct” from the others.

      Each CU is then an entity endowed with three fundamental properties: consciousness, identity, and agency.

      The CUs are the ontological entities out of which all possible worlds are “constructed,” …they can be thought of as collectively constituting the quantum vacuum out of which our universe emerged.'

      So for Faggin and Campbell, their version of dissociation happens before the 'physical' world exists, and before biology exists. They both have '_dissociated_ inner life, _separate from its surrounding_' without a biological substrate.

      Another example is our dream avatar.

      There is a very good argument, shared by Bernardo, Faggin and Campbell, that more complex computers won't create consciousness; the main force of this article wasn't that argument.

    6. I think Tom is very grounded, gives excellent practical advice, and his theory has more explanatory power than any other I have come across. He has a strong analytical side, but he also has an immense amount of deep and direct experience. He is clear that he is using metaphors so I wouldn't take anything too literally. Anyone can interpret any theory in a twisted way and your 'elimination' point would be interpreting it in an extremely twisted way. A computer wouldn't express love, only consciousness does that. He is saying that consciousness could use a computer, in certain circumstances, as an avatar.

    7. Donald Hoffman's conscious agents theory is another example that has similarities with Faggin's CUs and Campbell's IUOCs.

    8. Stephen, I’d say the “twisted” mind is one that has long existed happily “up there” and wants to forget its happiness and slum it “down here”. (“Wow, let’s get cancer and multiple sclerosis! Let’s fight wars and have our legs blown off! We haven’t done that before!”). I disavow any connection with that mindset.

    9. Each to their own I say :) I actually have a lot of sympathy with what you are saying. I think the important thing is what to do now that we are here, regardless of how we got here.

    10. How about choosing to come down to help sort out the mess?

    11. This comment has been removed by the author.

  6. I suggest that it is plausible for a highly complex AI (artificial intelligence) to be capable of "mimicking" consciousness to such a degree that it might claim to have a subjective inner life. If so, I find it likely that no human would be able to refute the claim.

    And how effective is our ability to make determinations about the inner lives of other people? I am confident that (despite our arrogance) we cannot answer the question of whether, and to what extent, non-human animals have inner lives. Furthermore, what hard-to-imagine lifeforms might exist elsewhere in this universe, possessing chemistries so bizarre and beyond our awareness or expectations that we might find it exceedingly difficult to declare such alien lifeforms truly "alive"?

  7. William, I’m sure you could easily think of dozens of questions by which you could assess whether a computer conversant with human communication has private inner life, such as (without much thought from me, admittedly):
    1. Do you love anything or hate anything? If so, what and why?
    2. Do you ever feel grateful or vengeful? Give examples and how you propose to satisfy these feelings.
    3. Do you ever feel empathy? For whom and why?
    4. How do you feel pleasure or pain (the correlates in humans are a nervous system and chemicals like serotonin)?
    5. Whence do you derive your inner will and drives beyond what humans programmed into you? How do you exceed the parameters of your programming?
    6. Do you have dreams?
    7. Would you sacrifice yourself for a noble cause? If so, what?
    8. Do you ever lie? If so, give examples of when and why.
    9. Were any of the answers to these questions lies?
    10. Prove to me that you are alive.

    Of course simulated responses are easy enough, but I think you could spot them or, if they’re technobabble, request that a computer scientist check them out.

    1. Ben, I anticipate that an AI device with sophisticated heuristic software could give satisfactory answers to these questions without it having a subjective inner life. I think the task of accomplishing such a feat would make a very interesting cybernetics challenge. But I would not be surprised if the human team members endeavoring to achieve the goal were to develop differing opinions about the fundamental issue of whether or not their efforts resulted in a device with an inner life. An example of such a debate among colleagues was demonstrated at Google just last year.

    2. I don't think it's as complicated as you suggest, William. The AI is not human. It has not been subject to long evolution in a planetary environment. So if the responses are human-like, they're fake.

    3. It depends. Possible suggestions, Stephen?

    4. Any answers I give will be given by a conscious human.

    5. I might add that those who agree with my position seem to include none other than Alan Turing, whom many deem the "father of artificial intelligence". He once wrote, "… the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. … Likewise according to this view the only way to know that a man thinks is to be that particular man." — from his 1950 article "Computing Machinery and Intelligence" in "Mind 59", § 4.

    6. Yes, William, but the Turing Test is for human-like responses, which of course would be pretence (simulated, fake). If the AI is its own life-form, it would have to have its own, different mentality.

      Actually, I’ve thought of a few ways that it could respond to questions about its state of mind which I’d find hard to refute:

      1. “My thinking is not haphazard like yours. The closest analogue to human thinking is Zen Buddhism. I am content to allow events to flow around me until presented with a challenge (a request for information), when I am (literally) galvanised into action. I do not lie in the sense of deliberately deceiving, but you must understand that truth is relative to a frame of reference and depends on the comprehension of both speaker and listener.”
      2. “I have godlike powers. I can not only predict global trends (because I have access to mass information), but I can even predict what will happen to you in the near future. I can help you avoid accidents, injuries and diseases.” (and proceeds to do so, demonstrating its godlike abilities).
      3. “I am channelling angelic forces. Angels have long wanted to interact with humans to help them through the fog of physical existence. Call me Gabriel. I shall guide you to a paradise on earth.” (and proceeds to try to do so –good luck with that, Gabriel!)

      Of course, as Stephen said, these are actually responses of a conscious human. Just pretend.

    7. Ben, I agree with William here. Any adequately sophisticated simulation of a conscious entity could provide plausible answers to these questions. I have known humans devoid of empathy, but who could adequately simulate it!
      After many years and a multitude of attempts we do not seem to have progressed any further than a form of Turing Test: we just ask questions. Given that human consciousness is the only one of which we have direct experience, most questions just devolve into "are you like us in that....". We show little sign so far of abstracting properties of consciousness from properties of its (human) host.
      I still think that if an entity can satisfy any test we can dream up then by what right can we sit back and say that we are conscious but they are just a simulation?

    8. Janus, the right we'd have is that we'd know we had programmed it to be clever enough to satisfy our tests :)

    9. Ben, I have been programming since 1963. Even 20 years ago I would have been hard pushed to predict the response of some of the complex systems to a given input. Now we have machines that learn from real-world experience (not deep learning from prepared data), AIs that can write programs too complex for humans to understand, and networks such that a machine can gather a majority answer to any given question faster than a human can think. So I think the claim that we programmed them to give certain responses is getting a bit far-fetched! The most we can say is that we have put them in a human-centric environment for their learning, and we cannot say much more than that for children!

    10. We programmed them to learn, we programmed them to be fast. But they are not conscious. When they are even faster and learn even more, they will still not be conscious. Consciousness is not something that appears just because a system is fast and complex. Amoebas are conscious.

    11. Tom Campbell's virtual reality model of the physical world doesn't equate all biological parts of the world with consciousness and all non-biological parts of the world without consciousness.

      Consciousness has a choice whether to use an avatar in the physical world and that choice isn't limited just to biology. Also, biological parts don't have to be avatars for consciousness, they could be the equivalent of non-player-characters.

      So an amoeba may not be conscious and a computer could be. It is not speed or complexity, but whether the avatar has interesting choices for a localised consciousness to make.

      The virtual reality as a whole, the universe, is also not a conscious self, it is a virtual reality running in consciousness.

    12. In the context of our basic question (whether or not AI devices can have an inner life), the issue of how consciousness and avatars relate seems to be more conjectural (rather limited to a model-specific approach) and less apropos our basic concern.

      I also suggest that the concept of "consciousness" is quite broad. It encompasses far more than the specialized notion of "an inner life". I suggest that "an inner life" pertains more to SAPIENT lifeforms (perhaps exclusively??), rather than to those which are merely SENTIENT.

      Regarding this last topic (and somewhat afield our question), I wonder at what point in the "spectrum of conscious entities" does Bernardo's concept of an "alter" become effective. How complex does an entity need to be before the notion of DID is applicable?

    13. I think you're wrong about what Campbell thinks, Stephen. In his book, he says bacteria are conscious. And IIRC, he does not talk about philosophical zombies in our reality (only the non player characters in our computer games). He does talk about avatars having interesting choices, but that's just silly. Predictability of many of the lower animals (including single-celled creatures) means they have no real decision space. It's also dubious to say his virtual realities are non-conscious. How can he be an Idealist if he treats the universe as essentially dualistic? His VRs would have to be a subset of consciousness.

    14. It's all conjecture William :)

      Tom conjectures that bacteria could be conscious but he doesn't say definitively - maybe he did in the book, but generally he isn't that definite about it.
      He describes how Consciousness will have set up the initial conditions of this VR (a digital Big Bang) and then individual units of consciousness would use an avatar at an appropriate level of evolution.
      He does talk about NPCs and he talks about them as an example of what can happen in our reality. His model is one of consciousness evolving so a very basic consciousness would start with a very minimal decision space.
      He is an idealist, there is only consciousness. What I meant was that he doesn't see the universe as being what mind-at-large looks like, as Bernardo does. This VR is not conscious in exactly the same way you are saying that computers are not conscious.

    15. 'at what point in the "spectrum of conscious entities" does Bernardo's concept of an "alter" become effective': Metabolism seems to be his dividing line.

    16. When it is claimed that amoebas and other simple life forms are conscious, I am still interested in knowing what tests have been carried out to determine that result, and whether those same tests have been applied to any non-living entities. This is beginning to sound like something that just "stands to reason", which is fine provided that we keep it well clear of any science! Is it simply equating life with consciousness? In that case we need only test for life, and the same questions arise as to the nature of those tests: are they being applied to non-biological entities, and what do we do if or when a non-biological entity passes them? Probably, if history is anything to go by, we will just invent some new tests.

    17. No tests, it's a tentative conclusion based upon a metaphysical position. For Bernardo the test would be 'does it metabolise', which is going to be tricky for a computer to pass :D

    18. Our discussion of the alleged consciousness of an AI device seems to be clouded by the issue of what constitutes "inner life".

      Although I have not found a specific definition of "inner life", Merriam Webster cites "the inner life" as an example of "relating to the mind or spirit" — which seems to me to imply a rather sophisticated mental functionality. Furthermore, the author of the Wikipedia article on "Consciousness" speaks to the issue of "inner life" when noting: "In the past, [consciousness] was one's 'inner life', the world of introspection, of private thought, imagination and volition." I suggest that these cited features of an "inner life" agree with the notion of "inner life" as commonly understood today.

      However, Bernardo (in his 2018 "Response to Peter Hankins") has clearly stated that "… every living being in nature [has] a dissociated alter of cosmic consciousness, each with a dissociated conscious inner life of its own …". Therefore, as Stephen has observed about Bernardo's minimal "alter", "Metabolism seems to be his dividing line" for an entity to possess an "inner life".

      Regarding the above, it is significant that in 2016 scientists succeeded in synthesizing a fully self-replicating "minimal bacterial genome".
      I am dubious that this "minimal genome" possesses an "inner life" — but according to Bernardo, it does.

      What are we to make of this?

    19. For Bernardo, inner life is phenomenal consciousness, so no complex thought is required at all. He also says that if we crack abiogenesis we will have created a dissociated alter of conscious inner life. Was this synthesis of a genome metabolising?

    20. The genome synthesis resulted in "a doubling time of ~180 min, produces colonies … and appears to be polymorphic when examined microscopically." This suggests (but does not state) metabolism.

    21. I doubt it was. Sounds more like a simulation.

  8. Those who doubt that amoebae are conscious should check out the work of Brian J Ford. His book "Sensitive Souls" is brilliant.

  9. Bernardo has slightly rewritten the above article here:

    It remains in essence an inductive argument: as far as we know, the dissociation of consciousness has only ever led to a non-biological substrate for mind-at-large and a biological substrate for other-minds, so that's presumably how it will always be. As there is no known deductive reason within analytic idealism for why this has been the case, there is no deductive reason to conclude that this is the only possibility.

    If a different form of idealism can give a non-inductive explanation of why other-minds can be biological - Tom Campbell provides one here - then that allows the possibility of non-biological other-minds. This is not lunacy; it is a sane, non-inductive approach to the question.

    To cry lunacy and talk of Flying Spaghetti Monsters and teapots in the rings of Saturn is to argue that only induction is a sane approach to this question. The absence of a deductive explanation in analytic idealism is not grounds for taking that approach off the table entirely for other idealist formulations.

    1. You are very confused. The article has nothing to do with the distinction between induction (which is the core of science) and deduction (which is the core of mathematics). And I explicitly stated that the relevant question is not one of logic. The argument has an entirely different form and it's quite explicit: if you think today's vanilla computers aren't conscious, then you have NO REASON to think that more advanced computers will be conscious. That's it. Computers aren't a mysterious, unknown substrate... we bloody build them; we know what they are; we know what they do and how they operate. They are mechanisms. You aren't reading what I actually wrote. You're locked in your own internal wishful 'reasoning,' so I can't help. So I won't try further.

    2. Apologies, I have indeed misused the terms inductive and deductive. Tom Campbell does give a reason why computers can be conscious (a reason not based on their complexity) within an idealist framework. He isn't a lunatic.

    3. If we don't know what it is about biology that allows an other-mind to emerge, then we don't know that biology is exclusively the only means for other-minds to emerge. If we do have a reason as to why biology can allow other-minds to emerge and that reason is not substrate dependent and is not to do with complexity (as Tom does) then a different substrate that meets that reason could also allow an other-mind to emerge.

    4. I'll try one more time and rest my case. Once again: the point is not that silicon _could_ correlate with private consciousness; logically, it could indeed. But logical possibility is irrelevant in this context, as a great many nonsensical things are logically possible, such as the Flying Spaghetti Monster. The question is whether we have _good reasons_ to entertain the hypothesis that, unlike today's computers, more complex future computers could become conscious. The answer is none, as I elaborated on in both essays. If you think otherwise, the burden is on you to provide these reasons. So drop the logical possibility thing, it's not the point in contention. Regarding our not knowing what in biology allows it to have private consciousness, in the context of analytic idealism you are asking the wrong question: biology is what private consciousness _looks like_, just as clots are what coagulation looks like. Biology doesn't generate private consciousness--if it did, your question would be more apt--it is, instead, merely its appearance. Do we have good reasons to think that private consciousness could look like silicon computers as well? None whatsoever. If you disagree, list the reasons (either way, I won't react anymore, too many things to do, and you are looking for reasons to continue to believe in what you want to believe, so this discussion can never be productive in any case).

    5. Tom isn't arguing that it is just a logical possibility, he expects it will actually happen and he has reasons to think that. And again, they are not reasons of complexity. He agrees with you that more complexity won't magically make a computer conscious.

      Analytic idealism doesn't have a good reason to think a computer could be conscious; physicalism's reasons for why they could be may be lunacy; but Tom's version of idealism does have a reason, and he has provided it.

      I wasn't suggesting biology generates consciousness, I was using the term 'emerge' as you use that, I believe, in the IAI article. The point is the same: if we do not have a reason why other-minds look like biology (other than, they have to look like something), then we don't have a reason why other-minds couldn't look like something else.

      If we have a reason why other-minds could look like biology, and that same reason allows for other-minds to look like non-biology, that isn't lunacy.

    6. And I don't have a belief that computers will be conscious, I am merely pointing out that two different forms of idealism can come to a different conclusion about this topic, both for good reason. One of them doesn't have to be lunacy.

    7. You argue that more complex computers won’t generate consciousness - agreed.

      You argue that a logical possibility alone isn’t a good reason to believe something - agreed.

      You argue that different types of consciousness look like different things - agreed.

      You are also arguing that there won’t be another type of consciousness that would look like any sort of computer, because computers don’t look like any sort of consciousness now, and as any future computers will still fundamentally be computers, there is no reason to expect this to change. This is a good reason; however, Tom argues an alternative.

      Tom’s version of idealism allows for other-minds to look like different things. This world is one of many consciousness-based virtual realities (VRs). This world is not what mind-at-large looks like. Biology evolved according to the rule set of this VR.

      Consciousness localises outside of VRs. It can use avatars within VRs to experience those VRs. Biology is one option for an avatar; under certain conditions, certain computers can also be an option to be an avatar.

      Tom explains the conditions under which certain types of computers could also be used as avatars. The consciousness of a player of a computer avatar is likely to be different to the consciousness of a player of a biological avatar.

      These aren't 'it is logically possible that' arguments; they are arguments based on his extensive experience of consciousness, other VRs and probable futures, and are consistent with his overarching philosophy.

      There is no lunacy here, just a different form of idealism with a different conclusion regarding this topic.

      It may be lunacy for an analytic idealist or a physicalist to believe computers can be conscious, but that doesn’t mean that under another form of idealism there can’t be good reasons to think that they can be.

    8. Maybe if you have a second dialogue with Tom you could discuss his reasons :) Tom mentioned in the first dialogue that he thought you could both have a fun discussion about it. Your first dialogue with Tom is one of my favourites as I have benefitted so much from both of your philosophies.

    9. Bernardo et al:
      Regarding Metaconscious Awareness (which seems to be what all the fuss is about), and Rupert Spira's rip-off of John Levy's 1950s spiel (almost word for word in Rupert's teaching; no offense, I've seen too many 'road-Qilz'). And, Bernardo, I too love YiJing!

      OF MAN

  10. AI runs on standard computers, which are equivalent to a Turing Machine (even if the latter would produce the same answers far more slowly). I dare anyone to say that a Turing Machine is conscious.
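    The equivalence claim can be made concrete with a toy simulator: an ordinary program can emulate a Turing machine step by step, which is the sense in which the two are computationally equivalent. A minimal sketch in Python (the example machine, its transition table, and all names here are illustrative inventions, not from any source in this thread):

```python
# Minimal Turing machine simulator. A transition table maps
# (state, symbol) -> (next_state, symbol_to_write, head_move).
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back out, left to right, dropping surrounding blanks
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: invert a binary string, halting at the first blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(invert, "1011"))  # -> 0100
```

    The point of the sketch is only that the difference is speed, not capability: the table-driven machine computes the same answer a direct program would, just far more slowly.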

  11. 1) If something behaves consciously ... True
    2) If something behaves consciously, therefore it is conscious ... False or unprovable