If you watched the debate live, you know that, at the very end, I wanted to reply to a point made by Susan but couldn’t, since we ran out of time. The goal of this essay is to put my reply on the record in writing, so as to get it out of my system. Before I do that, however, I need to give some context for those who didn’t watch the debate live and don’t have a subscription to the IAI to watch it before reading this essay. If you did watch the debate, you can skip ahead to the section ‘My missing reply.’
Context
In a nutshell, my position is that we have no reason to believe that silicon computers will ever become conscious. I cannot refute the hypothesis categorically, but then again, I cannot categorically refute the hypothesis of the Flying Spaghetti Monster either, as the latter is logically coherent. Appeals to logical coherence mean as little in the conscious AI debate as they do in the Flying Spaghetti Monster context. The important point is not what is logically coherent or what can be categorically refuted, but what hypothesis we have good reasons to entertain.
Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and how AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data by opening and closing electrical switches called transistors, is dry, etc. They are utterly different.
The isomorphism between AI computers and biological brains is only found at very high levels of purely conceptual abstraction, far away from empirical reality, in which disembodied—i.e. medium-independent—patterns of information flow are compared. Therefore, to believe in conscious AI one has to arbitrarily dismiss all the dissimilarities at more concrete levels, and then—equally arbitrarily—choose to take into account only a very high level of abstraction where some vague similarities can be found. To me, this constitutes an expression of mere wishful thinking, ungrounded in reason or evidence.
Towards the end of the debate I touched on an analogy. Those who believe in conscious AI tend to ask the following rhetorical question to make their point: “If brains can produce consciousness, why can’t computers do so as well?” As an idealist, I reject the claim that brains produce consciousness to begin with but, for the sake of focusing on the point in contention, I choose to interpret the question in the following way: “If brains are correlated with private conscious inner life, why can’t computers be so as well?” The question I raised towards the end of the debate was an answer to the aforementioned rhetoric: if birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well? The point of this equally rhetorical question, of course, is to highlight the fact that two dissimilar things—birds and humans—simply do not share every property or function (why should they?). So why should brains and computers?
Susan then took my analogy and gave it a different spin, taking it beyond the intended context and limits (which is the perennial problem with analogies): she pointed out that, if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try and build an airplane, which is itself different from a bird. Her point was that one phenomenon—in this case, flight—can have multiple instantiations in nature, in different substrates—namely, a bird and an airplane. So although silicon computers are different from biology, in principle both could instantiate the phenomenon of private conscious inner life. This is a point of logic that I wanted to react to at the end of the debate, but didn’t have time to.
My missing reply
Here’s what I wanted to say at the end of the debate: indeed, we are not logically forced to limit the instantiations of private conscious inner life to a biological substrate alone. But this isn’t the point, as there are a great many silly hypotheses that are also logically—and even physically—coherent, yet obviously shouldn’t be entertained at all (such as the Flying Spaghetti Monster, or that there is a 19th-century teapot in orbit around Saturn). The real point is whether we have good reasons to take seriously the hypothesis that private consciousness can correlate with silicon computers. Does the analogy of flight—namely, that airplanes and birds are different but nonetheless can both fly, so private consciousness could in principle be instantiated on both biological and non-biological substrates—provide us with good reasons to think that AI computers can become conscious in the future?
It may sound perfectly reasonable to say that it does, but—and here is the important point—if so, then the same reasoning applies to non-AI computers that already exist today, for the underlying substrate (namely, conducting metal, dielectric oxide and doped semiconducting silicon) and basic functional principles (data processing through the movement of electric charge) are the same in all cases. There is no fundamental difference between today’s ‘dumb’ computers and the complex AI projected for the future. AI algorithms run on parallel information-processing cores of the kind we have had for many years in our PCs (specifically, in the graphics cards therein); there are just more of them, faster and more interconnected, executing instructions in different orders (i.e. running different software). Given the so-called ‘hard problem of consciousness,’ it is at least very difficult to see what miracle could make instructions executed in different orders, or more and faster components of the same kind, lead to the extraordinary and intrinsically discontinuous jump from unconsciousness to consciousness. The onus of argument here is on the believers, not the skeptics.
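To make this concrete, here is a minimal sketch in Python (illustrative only; the numbers and names are my own assumptions, not any particular AI system) showing that the core operation of an artificial ‘neuron’ reduces to the same mundane multiply, add and compare instructions that any ordinary program executes:

```python
# A single artificial 'neuron': nothing but multiplications, additions
# and a comparison, i.e. ordinary instructions any CPU or GPU executes.

def neuron(inputs, weights, bias):
    # Weighted sum: plain multiply-and-add instructions.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 'ReLU' nonlinearity: a comparison and a selection, nothing more.
    return max(0.0, activation)

# An AI model is billions of these executed in parallel; a 'dumb' program
# is the same kind of instructions executed in a different order.
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # prints 0.1
```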
Even new, emerging computer architectures, such as neuromorphic processors, are essentially CMOS devices (or similar devices, built with philosophically equivalent process technologies) moving electric charges around, just like their predecessors. To point out that these new architectures are analog, instead of digital, doesn’t help either: digital computers move charges around just like their analog counterparts; the only difference lies in how the information arising from those charge movements is interpreted. Namely, the microswitches in digital computers apply a threshold to the amount of charge before deciding its meaning, while analog computers don’t. But beyond this interpretational step—trivial for the purposes of the point in contention—both analog and digital computers embody essentially the same substrate. Moreover, the operation of both is based on the flow of electric charges along metal traces and the storage of charges in charge-holding circuits (i.e. memories).
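A toy sketch may help illustrate how trivial that interpretational step is (the threshold voltage below is an arbitrary assumption of mine, not a real process parameter): the very same physical quantity is ‘read’ either as itself (analog) or as a thresholded bit (digital):

```python
# The same physical quantity, a voltage produced by moved charge, read
# in two ways. Only the interpretation differs, not the substrate.

V_THRESHOLD = 0.9  # assumed switching threshold, in volts (illustrative)

def analog_read(voltage):
    # Analog interpretation: the quantity itself is the information.
    return voltage

def digital_read(voltage):
    # Digital interpretation: threshold the quantity into a bit.
    return 1 if voltage >= V_THRESHOLD else 0

for v in (0.2, 0.7, 1.1, 1.6):
    print(f"{v:.1f} V -> analog: {analog_read(v):.1f}, digital: {digital_read(v)}")
```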
So, if you grant Susan’s point that there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious (including the computer or phone you are using to read these words). The reason is twofold: first, the substrate of today’s ‘dumb’ computers is the same as that of advanced AI computers (in both cases, charges move around in metal and silicon substrates); second, whatever change in organization or functionality happens in future CMOS or similar devices, such changes are philosophically trivial for the point in contention, as they cannot in themselves account for the emergence of consciousness from unconsciousness (vis-à-vis the hard problem). If you are prepared to go this far in your fantastical hypothesizing, then turning off your phone may already be an act of murder.
Alternatively, if today’s computers aren’t plausibly conscious, then neither do we have good reasons to believe that future, advanced AI computers will be, even if Susan’s flight analogy holds. For the point here is not one of logical—or even physical—possibility, but of natural plausibility. A 19th-century teapot in orbit around Saturn is both logically and physically possible (aliens could have come to Earth in the 19th century, stolen the teapot from someone’s dining room, and then dumped it in the vicinity of Saturn on their way back home, after which the unfortunate teapot got captured by Saturn’s gravitational field), but naturally implausible to the point of being dismissible.
Are water pipes conscious too?
You see, everything a computer does can, in principle, be done with pipes, pressure valves and water. The pipes play the role of electrical conduits, or traces; the pressure valves play the role of switches, or transistors; and the water plays the role of electricity. Ohm’s Law—the fundamental rule for determining the behavior of electric circuits—maps one-to-one onto relations of water pressure and flow. Indeed, the reason why we build computers with silicon and electricity, instead of PVC pipes and water, is that the former are much, much smaller and cheaper to make. Present-day computer chips have tens of billions of transistors, and an even greater number of individual traces. Can you imagine the size and cost of a water-based computer comprising tens of billions of pipes and pressure valves? Can you imagine the amount of energy required to pump water through it? You wouldn’t be able to afford it or carry it in your pocket. That’s the sole reason why we compute with electricity instead of water (it also helps that silicon is one of the most abundant elements on Earth, available in the form of sand). From the perspective of computation, there is nothing fundamentally different between a pipe-valve-water computer and an electronic one. Electricity is not a magical or unique substrate for computation, but merely a convenient one. A wooden tool called an ‘abacus’ also computes.
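To make the equivalence concrete, here is a toy simulation in Python (the names and conventions are my own illustrative assumptions, not a real hydraulic design) of a NAND gate, the universal building block from which any logic circuit can be assembled, made of pressure-controlled valves instead of transistors:

```python
# A pressure-controlled valve standing in for a transistor: two of them
# in series form a NAND gate, from which any logic circuit can be built.

HIGH, LOW = 1, 0  # high/low water pressure, standing in for voltage levels

def valve_open(control_pressure):
    # The valve opens when its control pressure is HIGH,
    # like an n-type transistor conducting when its gate is driven high.
    return control_pressure == HIGH

def nand(a, b):
    # The output is drained to LOW only when both series valves are open;
    # otherwise the supply keeps it at HIGH.
    return LOW if (valve_open(a) and valve_open(b)) else HIGH

for a in (LOW, HIGH):
    for b in (LOW, HIGH):
        print(f"NAND({a}, {b}) = {nand(a, b)}")
# Same truth table as an electronic NAND gate: 1, 1, 1, 0.
```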
With this in mind, ask yourself: do we have good reasons to believe that a system made of pipes, valves and water correlates with private conscious inner life the way your brain does? Is there something it is like to be the pipes, valves and water put together? If you answer ‘yes’ to this question, then logic forces you to start wondering if your house’s sanitation system—with its pipes, valves and water—is conscious, and whether it is murder to turn off the mains valve when you go on vacation. For the only difference between your house’s sanitation system and my imaginary water-based computer is one of number—namely, how many pipes, how many valves, how many liters of water—not of kind or essence. As a matter of fact, the typical home sanitation system implements the functionality of about 5 to 10 transistors.
You can, of course, choose to believe that the numbers actually matter. In other words, you may entertain the hypothesis that although a simple, small home sanitation system is unconscious, if you keep on adding pipes, valves and water to it, at some point the system will suddenly make the jump to being conscious. But this is magical thinking. You’d have to ask yourself the question: how, precisely, does the mere addition of more of the same pipes, valves and water lead to the magical jump to conscious inner life? Unless you have an explicit and coherent answer to this question, you are merely engaging in hand-waving, self-deception, and hiding behind vague complexity.
Conclusion
That there can logically be instantiations of private conscious inner life on different substrates does not provide reason to believe that, although ‘dumb’ computers aren’t conscious, more complex computers in the future, with more transistors and running more complex software, will become conscious. The key problem for those who believe in conscious AI is how and why this transition from unconsciousness to consciousness should ever take place. Susan’s flight analogy does not help here, as it merely argues for the logical possibility of such a transition, without saying anything about its natural plausibility.
If, like me, you believe that ‘dumb’ computers today—automata that mechanically follow a list of commands—aren’t conscious, then Susan’s flight analogy gives you no reason to take seriously the hypothesis that future computers—equally made of silicon and moving electric charges around—will become conscious. That they will run more sophisticated AI software only means that they will execute, just as blindly and mechanically as before, a different list of commands. What those computers will be able to do can be done with pipes, pressure valves and water, even though the latter isn't practical.
It is very difficult—if at all possible—to definitively refute someone whose view is agnosticism, or a wait-and-see attitude, since that isn’t a definite position to begin with. So what is there to argue against? “Well, we just don’t know, do we?” is a catch-all reply that can be issued in the face of any criticism, regardless of how well articulated it is, for what can we humans—monkeys running around a space rock for less than 300 thousand years—know for sure to begin with? Yet, I feel that I should nonetheless keep on trying to argue against this form of open-mindedness, for there is a point where it opens the doors to utter and pernicious nonsense.
You see, I could coherently pronounce my open-mindedness about the Flying Spaghetti Monster, for we just don’t know for sure whether it exists, do we? For all I know, there is a noodly monster floating around in space, in a higher dimension invisible to us, moving the planets around their orbits with its—invisible—noodly appendages. The evidence is surely consistent with this hypothesis: the planets do move around their orbits, even though no force is imparted on them through visible physical contact. Stronger still, the hypothesis even seems to explain our observations of planetary movements. And there is nothing logically wrong, or even physically refutable, about it either. So, what do we know? Maybe the hypothesis is right, and thus we should remain open-minded and not arbitrarily dismiss the Monster. Let us all wear an upside-down pasta strainer on our heads! Do you see the point?
No, we have no good reason to believe in conscious AI. This is a fantasy unsupported by reason or evidence. Epistemically, it’s right up there in the general vicinity of the Flying Spaghetti Monster. Entertaining conscious AI seriously is counterproductive: it legitimizes the expenditure of scarce human resources—including taxpayer money—on problems that do not exist, such as the ethics and rights of AI entities. It contaminates our culture by distorting our natural sense of plausibility and conflating reality with (bad) fiction. AIs are complex tools, just as a nuclear power plant is a complex tool. We should take safety precautions with AIs just as we take safety precautions with nuclear power plants, without having ethics discussions about the rights of power plants. Anything beyond this is just fantastical nonsense and should be treated as such.
Allow me to vent a little more…
I believe one of the unfortunate factors that contribute to the pernicious fiction of conscious AI today is the utter lack of familiarity, even—well, particularly—among highly educated computer scientists, with what computers actually are, how they actually work, and how they are actually built. Generations have now come out of computer science school knowing how to use a voluminous hierarchy of pre-built software libraries and tooling—meant precisely to insulate them from the dirty details we call reality—but not having the faintest clue about how to design and build a computer. These are our ‘computer experts’ today: they are mere power users of computers, knowing precious little about the latter's inner workings. They think entirely in a realm of conceptual abstraction, enabled by tooling and disconnected from the (electrical) reality of integrated circuits (ICs) and hardware. For them, since the CPU—the Central Processing Unit, the computer's 'brain'—is a mysterious black box anyway, it's easy to project all their fantasies onto it, thereby filling the vacuum left open by a lack of understanding with wishful, magical thinking. The psychology at play here has been so common throughout human history that we can consider it banal. On the other hand, those who do know how to build a CPU and a computer as a whole, such as Federico Faggin, father of the microprocessor and inventor of silicon gate technology, pooh-pooh ‘conscious AI’ every bit as much as I do.
Having worked on the design and manufacture of computer ICs for over two decades, I estimate that perhaps only about 2000 people alive today know how to start from sand and end up with a working computer. This is extremely worrisome, for if a cataclysm were to wipe out our technical literature together with those 2000 people tomorrow, we would not know how to reboot our technological infrastructure. It is also worrisome in that it opens the door to the foolishness of conscious AI, which is now being actively peddled by computer science lunatics with the letters ‘PhD’ suffixed to their names. After all, a PhD in conceptual abstraction is far from a PhD in reality. (As an aside, PhD lunacy is much more dangerous than garden-variety lunacy, for the average person on the street takes the former, but not the latter, seriously. With two PhDs myself, I may know a thing or two about how lunatics can get PhDs.)
But instead of just criticizing and pointing to problems, I’ve decided to try and do something about it, modest and insignificant as my contributions may be. For almost three years now, I have been designing—entirely from scratch—one complete and working computer per year. I put all the plans, documentation and software up online, fully open source, for anyone to peruse. I hope this makes a contribution to educating people about computers; particularly those computer scientists who have achieved lift-off and now operate free of any friction with reality. Anyone can download those plans—which include gate-level details of how to build the associated custom ICs—and build these computers from scratch. The designs were made not only to work properly, but also to be easy to understand and follow. If I can bring one or two computer scientists back to the solid ground of reality with those designs, I’ll consider my efforts successful.