If you watched the debate live, you know that, at the very end, I wanted to reply to a point made by Susan but couldn’t, since we ran out of time. The goal of this essay is to put my reply on the record in writing, so as to get it out of my system. Before I do that, however, I need to give some context for those who didn’t watch the debate live and don’t have a subscription to the IAI to watch it before reading this essay. If you did watch the debate, you can skip ahead to the section ‘My missing reply.’
Context
In a nutshell, my position is that we have no reason to believe that silicon computers will ever become conscious. I cannot refute the hypothesis categorically, but then again, I cannot categorically refute the hypothesis of the Flying Spaghetti Monster either, as the latter is logically coherent. Appeals to logical coherence mean as little in the conscious AI debate as they do in the Flying Spaghetti Monster context. The important point is not what is logically coherent or what can be categorically refuted, but what hypothesis we have good reasons to entertain.
Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data. To find that similarity, however, one has to take several steps of abstraction away from concrete reality. After all, if you put an actual human brain and an actual silicon computer on a table before you, there is no correspondence of form or functional similarity between the two at all; much to the contrary. A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.
The isomorphism between AI computers and biological brains is only found at very high levels of purely conceptual abstraction, far away from empirical reality, in which disembodied—i.e. medium-independent—patterns of information flow are compared. Therefore, to believe in conscious AI one has to arbitrarily dismiss all the dissimilarities at more concrete levels, and then—equally arbitrarily—choose to take into account only a very high level of abstraction where some vague similarities can be found. To me, this constitutes an expression of mere wishful thinking, ungrounded in reason or evidence.
Towards the end of the debate I touched on an analogy. Those who believe in conscious AI tend to ask the following rhetorical question to make their point: “If brains can produce consciousness, why can’t computers do so as well?” As an idealist, I reject the claim that brains produce consciousness to begin with but, for the sake of focusing on the point in contention, I choose to interpret the question in the following way: “If brains are correlated with private conscious inner life, why can’t computers be so as well?” The question I raised towards the end of the debate was an answer to the aforementioned rhetoric: if birds can fly by flapping their upper limbs, why can’t humans fly by doing so as well? The point of this equally rhetorical question, of course, is to highlight the fact that two dissimilar things—birds and humans—simply do not share every property or function (why should they?). So why should brains and computers?
Susan then took my analogy and gave it a different spin, taking it beyond the intended context and limits (which is the perennial problem with analogies): she pointed out that, if the Wright brothers had believed that only birds can fly, they wouldn’t have bothered to try and build an airplane, which is itself different from a bird. Her point was that one phenomenon—in this case, flight—can have multiple instantiations in nature, in different substrates—namely, a bird and an airplane. So although silicon computers are different from biology, in principle both could instantiate the phenomenon of private conscious inner life. This is a point of logic that I wanted to react to at the end of the debate, but didn’t have time to.
My missing reply
Here’s what I wanted to say at the end of the debate: indeed, we are not logically forced to limit the instantiations of private conscious inner life to a biological substrate alone. But this isn’t the point, as there are a great many silly hypotheses that are also logically—and even physically—coherent, yet obviously shouldn’t be entertained at all (such as the Flying Spaghetti Monster, or that there is a 19th-century teapot in the orbit of Saturn). The real point is whether we have good reasons to take seriously the hypothesis that private consciousness can correlate with silicon computers. Does the analogy of flight—namely, that airplanes and birds are different but nonetheless can both fly, so private consciousness could in principle be instantiated on both biological and non-biological substrates—provide us with good reasons to think that AI computers can become conscious in the future?
It may sound perfectly reasonable to say that it does, but—and here is the important point—if so, then the same reasoning applies to non-AI computers that exist already today, for the underlying substrate (namely, conducting metal, dielectric oxide and doped semiconducting silicon) and basic functional principles (data processing through electric charge movement) are the same in all cases. There is no fundamental difference between today's 'dumb' computers and the complex AI projected for the future. AI algorithms run on parallel information processing cores of the kind we have had for many years in our PCs (specifically, in the graphics cards therein), just with more, faster and more interconnected cores, executing instructions in different orders (i.e. different software). Given the so-called ‘hard problem of consciousness,’ it is at least very difficult to see what miracle could make instructions executed in different orders, or more and faster components of the same kind, lead to the extraordinary and intrinsically discontinuous jump from unconsciousness to consciousness. The onus of argument here is on the believers, not the skeptics.
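To make concrete what ‘different software’ means here, consider a minimal sketch in Python (the function names and numbers are mine, purely illustrative): an AI ‘neuron’ reduces to the same multiply, add and compare instructions that every ‘dumb’ computer has executed for decades, merely issued in a different order and in greater quantity.

```python
# A minimal sketch of one artificial 'neuron' (illustrative only).
# Nothing here is new in kind: only multiplication, addition and a
# comparison -- instructions any ordinary program also executes.

def neuron(inputs, weights, bias):
    # Weighted sum: the same multiply-and-accumulate a spreadsheet uses.
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w
    # A simple threshold non-linearity (ReLU): just a comparison.
    return max(0.0, total)

def layer(inputs, weight_rows, biases):
    # A 'layer' is merely many such sums, which GPUs run in parallel.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Example: three inputs feeding two neurons.
print(layer([0.5, -1.0, 2.0],
            [[0.1, 0.2, 0.3], [-0.4, 0.5, -0.6]],
            [0.0, 0.1]))
```

Whether these operations run on one core or on thousands of graphics-card cores in parallel changes their speed, not their nature.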
Even new, emerging computer architectures, such as neuromorphic processors, are essentially CMOS (or similar, using philosophically equivalent process technologies) devices moving electric charges around, just like their predecessors. To point out that these new architectures are analog, instead of digital, doesn’t help either: digital computers move charges around just like their analog counterparts; the only difference is in how information arising from those charge movements is interpreted. Namely, the microswitches in digital computers apply a threshold to the amount of charge before deciding its meaning, while analog computers don’t. But beyond this interpretational step—trivial for the purposes of the point in contention—both analog and digital computers embody essentially the same substrate. Moreover, the operation of both is based on the flow of electric charges along metal traces and the storage of charges in charge-holding circuits (i.e. memories).
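To illustrate how thin that interpretational step is, here is a minimal sketch (the threshold value is a made-up assumption, not a real process parameter): digital and analog readings start from the same physical quantity; only the digital one collapses it to a bit.

```python
# A minimal sketch of the digital/analog 'interpretational step'
# (illustrative; the threshold is an arbitrary, assumed value).

V_THRESHOLD = 0.6  # hypothetical logic threshold, in volts

def digital_read(voltage):
    # Digital: apply a threshold, collapsing the quantity to 0 or 1.
    return 1 if voltage >= V_THRESHOLD else 0

def analog_read(voltage):
    # Analog: the continuous physical quantity itself is the signal.
    return voltage

for v in (0.2, 0.59, 0.61, 1.1):
    print(f"{v} V -> digital {digital_read(v)}, analog {analog_read(v)}")
```

In both cases the underlying physical event is the same movement and accumulation of charge; the difference lies entirely in how we choose to read it.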
So, if you grant Susan’s point that there can be instantiations of private consciousness on different substrates, and that one of these substrates is a silicon computer, then you must grant that today’s ‘dumb’ computers are already conscious (including the computer or phone you are using to read these words). The reason is twofold: first, the substrate of today’s ‘dumb’ computers is the same as that of advanced AI computers (in both cases, charges move around in metal and silicon substrates); second, whatever change in organization or functionality happens in future CMOS or similar devices, such changes are philosophically trivial for the point in contention, as they cannot in themselves account for the emergence of consciousness from unconsciousness (vis-à-vis the hard problem). If you are prepared to go this far in your fantastical hypothesizing, then turning off your phone may already be an act of murder.
Alternatively, if today’s computers aren’t plausibly conscious, then neither do we have good reasons to believe that future, advanced AI computers will be, even if Susan’s flight analogy holds. For the point here is not one of logical—or even physical—possibility, but of natural plausibility instead. A 19th-century teapot in the orbit of Saturn is both logically and physically possible (aliens could have come to Earth in the 19th century, stolen the teapot from someone’s dining room, and then dumped it in the vicinity of Saturn on their way back home, after which the unfortunate teapot got captured by Saturn’s gravitational field), but naturally implausible to the point of being dismissible.
Are water pipes conscious too?
You see, everything a computer does can, in principle, be done with pipes, pressure valves and water. The pipes play the role of electrical conduits, or traces; the pressure valves play the role of switches, or transistors; and the water plays the role of electricity. Ohm’s Law—the fundamental rule for determining the behavior of electric circuits—maps one-to-one to water pressure and flow relations. Indeed, the reason why we build computers with silicon and electricity, instead of PVC pipes and water, is that the former are much, much smaller and cheaper to make. Present-day computer chips have tens of billions of transistors, and an even greater number of individual traces. Can you imagine the size and cost of a water-based computer comprising tens of billions of pipes and pressure valves? Can you imagine the amount of energy required to pump water through it? You wouldn't be able to afford it or carry it in your pocket. That’s the sole reason why we compute with electricity, instead of water (it also helps that silicon is one of the most abundant elements on Earth, found in the form of sand). There is nothing fundamentally different between a pipe-valve-water computer and an electronic one, from the perspective of computation. Electricity is not a magical or unique substrate for computation, but merely a convenient one. A wooden tool called an 'abacus' also computes.
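This substrate independence can be made concrete with a toy sketch (the physical mappings below are my own illustrative assumptions, not any real design): the same NAND logic composes into a half-adder regardless of whether its two levels are realized as voltages across transistors or as water flowing through pressure valves.

```python
# A toy sketch of substrate independence (hypothetical mappings, not a
# real design). Ohm's Law and its hydraulic analogue share one form:
#   electrical: V  = I * R   (voltage drop = current * resistance)
#   hydraulic:  dP = Q * Rh  (pressure drop = flow * flow resistance)
# Logic, likewise, does not care what carries its two levels.

def nand(a, b):
    # The universal gate: any digital circuit can be built from NANDs.
    return 0 if (a and b) else 1

def half_adder(a, b):
    # Sum and carry of two bits, composed entirely from NAND gates.
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))  # XOR via four NANDs
    carry = nand(n1, n1)                    # AND via two NANDs
    return total, carry

# Two hypothetical physical encodings of the very same logic levels.
electronic = {0: "low voltage", 1: "high voltage"}
hydraulic = {0: "no flow", 1: "water flowing"}

for a in (0, 1):
    for b in (0, 1):
        total, carry = half_adder(a, b)
        print(f"{a}+{b} -> sum={total} ({electronic[total]} / "
              f"{hydraulic[total]}), carry={carry}")
```

The program's behavior is fixed by the logic alone; swapping the dictionary of physical encodings changes nothing about what is computed.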
With this in mind, ask yourself: do we have good reasons to believe that a system made of pipes, valves and water correlates with private conscious inner life the way your brain does? Is there something it is like to be the pipes, valves and water put together? If you answer ‘yes’ to this question, then logic forces you to start wondering if your house’s sanitation system—with its pipes, valves and water—is conscious, and whether it is murder to turn off the mains valve when you go on vacation. For the only difference between your house’s sanitation system and my imaginary water-based computer is one of number—namely, how many pipes, how many valves, how many liters of water—not of kind or essence. As a matter of fact, the typical home sanitation system implements the functionality of about 5 to 10 transistors.
You can, of course, choose to believe that the numbers actually matter. In other words, you may entertain the hypothesis that although a simple, small home sanitation system is unconscious, if you keep on adding pipes, valves and water to it, at some point the system will suddenly make the jump to being conscious. But this is magical thinking. You'd have to ask yourself the question: how, precisely, does the mere addition of more of the same pipes, valves and water lead to the magical jump to conscious inner life? Unless you have an explicit and coherent answer to this question, you are merely engaging in hand waving, self-deception, and hiding behind vague complexity.
Conclusion
That there can logically be instantiations of private conscious inner life on different substrates provides no reason to believe that, although ‘dumb’ computers aren’t conscious, more complex computers in the future, with more transistors and running more complex software, will become conscious. The key problem for those who believe in conscious AI is how and why this transition from unconsciousness to consciousness should ever take place. Susan’s flight analogy does not help here, as it merely argues for the logical possibility of such a transition, without saying anything about its natural plausibility.
If, like me, you believe that ‘dumb’ computers today—automata that mechanically follow a list of commands—aren’t conscious, then Susan’s flight analogy gives you no reason to take seriously the hypothesis that future computers—equally made of silicon and moving electric charges around—will become conscious. That they will run more sophisticated AI software only means that they will execute, just as blindly and mechanically as before, a different list of commands. What those computers will be able to do can be done with pipes, pressure valves and water, even though the latter isn't practical.
It is very difficult—if at all possible—to definitively refute someone whose view is agnosticism, or a wait-and-see attitude, since that isn’t a definite position to begin with. So what is there to argue against? “Well, we just don’t know, do we?” is a catch-all reply that can be issued in the face of any criticism, regardless of how well articulated it is, for what can we humans—monkeys running around a space rock for less than 300 thousand years—know for sure to begin with? Yet, I feel that I should nonetheless keep on trying to argue against this form of open-mindedness, for there is a point where it opens the doors to utter and pernicious nonsense.
You see, I could coherently pronounce my open-mindedness about the Flying Spaghetti Monster, for we just don’t know for sure whether it exists, do we? For all I know, there is a noodly monster floating around in space, in a higher dimension invisible to us, moving the planets around their orbits with its—invisible—noodly appendages. The evidence is surely consistent with this hypothesis: the planets do move around their orbits, even though no force is imparted on them through visible physical contact. Stronger still, the hypothesis even seems to explain our observations of planetary movements. And there is nothing logically wrong, or even physically refutable, about it either. So, what do we know? Maybe the hypothesis is right, and thus we should remain open-minded and not arbitrarily dismiss the Monster. Let us all wear an upside-down pasta strainer on our heads! Do you see the point?
No, we have no good reason to believe in conscious AI. This is a fantasy unsupported by reason or evidence. Epistemically, it’s right up there in the general vicinity of the Flying Spaghetti Monster. Entertaining conscious AI seriously is counterproductive; it legitimizes the expenditure of scarce human resources—including taxpayer money—on problems that do not exist, such as the ethics and rights of AI entities. It contaminates our culture by distorting our natural sense of plausibility and conflating reality with (bad) fiction. AIs are complex tools, just as a nuclear power plant is a complex tool. We should take safety precautions with AIs just as we take safety precautions with nuclear power plants, without having ethics discussions about the rights of power plants. Anything beyond this is just fantastical nonsense and should be treated as such.
Allow me to vent a little more…
I believe one of the unfortunate factors that contribute to the pernicious fiction of conscious AI today is the utter lack of familiarity, even—well, particularly—among highly educated computer scientists, with what computers actually are, how they actually work, and how they are actually built. Generations have now come out of computer science school knowing how to use a voluminous hierarchy of pre-built software libraries and tooling—meant precisely to insulate them from the dirty details we call reality—but not having the faintest clue about how to design and build a computer. These are our ‘computer experts’ today: they are mere power users of computers, knowing precious little about the latter's inner workings. They think entirely in a realm of conceptual abstraction, enabled by tooling and disconnected from the (electrical) reality of integrated circuits (ICs) and hardware. For them, since the CPU—the Central Processing Unit, the computer's 'brain'—is a mysterious black box anyway, it's easy to project all their fantasies onto it, thereby filling the vacuum left open by a lack of understanding with wishful, magical thinking. The psychology at play here has been so common throughout human history that we can consider it banal. On the other hand, those who do know how to build a CPU and a computer as a whole, such as Federico Faggin, father of the microprocessor and inventor of silicon gate technology, pooh-pooh ‘conscious AI’ every bit as much as I do.
Having worked on the design and manufacture of computer ICs for over two decades, I estimate that perhaps only about 2000 people alive today know how to start from sand and end up with a working computer. This is extremely worrisome, for if a cataclysm wipes out our technical literature together with those 2000 people tomorrow, we will not know how to reboot our technological infrastructure. It is also worrisome in that it opens the door to the foolishness of conscious AI, which is now being actively peddled by computer science lunatics with the letters ‘PhD’ suffixing their names. After all, a PhD in conceptual abstraction is far from a PhD in reality. (As an aside, PhD lunacy is much more dangerous than garden-variety lunacy, for the average person on the streets takes the former, but not the latter, seriously. With two PhDs myself, I may know a thing or two about how lunatics can get PhDs.)
But instead of just criticizing and pointing to problems, I’ve decided to try and do something about it, modest and insignificant as my contributions may be. For almost three years now, I have been designing—entirely from scratch—one complete and working computer per year. I put all the plans, documentation and software up online, fully open source, for anyone to peruse. I hope this makes a contribution to educating people about computers; particularly those computer scientists who have achieved lift-off and now work without friction with reality. Anyone can download those plans—which include gate-level details for how to build the associated custom ICs—and build their computers from scratch. The designs were made to not only work properly, but to also be easy to understand and follow. If I can bring one or two computer scientists back to the solid ground of reality with those designs, I’ll consider my efforts successful.
I watched and thoroughly enjoyed the debate last night, but I would like to raise one or two points with you. These are immediate thoughts, not the result of serious analysis, sadly I have reached the age where if I spend too long thinking about something I forget the original topic!
You say "appallingly biased notion of isomorphism—a correspondence of form, or a similarity—between how humans think and AI computers process data." In my career as an academic Computer Scientist and a practising Software Engineer I cannot remember anyone seriously imagining correspondence at that physical level. It was, as you say, several abstractions away, at the level of properties emerging as a result of increasing complexity, without specific regard for the physical basis of that complexity, except in so far as the current digital computers were the only feasible candidate.
Your analogy with the Flying Spaghetti Monster is amusing but I am not sure it is valid. Skipping over the improbability of pasta-based life forms, there is no one now, in the past, or in the foreseeable future searching for such a thing or researching it as a candidate for consciousness, whereas there are, have been and will continue to be very many of humanity's brightest and best searching and researching consciousness in machines. Of course searching does not automatically imply that it will be found, or that it exists to be found but, even without some warped application of the anthropic principle, it should increase the chances of finding something useful in the subject area. To use your analogy of flight: the idea that humans could fly began as fantasy (Icarus, angels etc), followed by many centuries of trial and error and claims that it had been done or was just around the corner. Eventually, when we understood things like the relationship between surface area and volume, power/weight ratios and aerodynamics, we worked out that it was impossible: humans would never fly just by flapping their arms, but we could build machinery that circumvented the restriction and allowed us to get the same effect.
I note your comment that current dumb computers perhaps should also be considered conscious, if the silicon substrate is valid. Perhaps they are, in a sense. Are plants conscious? Most people using the naive idea of consciousness as something "we know when we see it" would consider higher mammals and perhaps birds, but would start drawing the line at insects and certainly plants, yet the biological substrate is essentially the same.
Finally, I think that we are asking the wrong question. We should not be asking how we can determine or demonstrate that a machine is conscious but rather, should a machine claim to be conscious, how we would determine or demonstrate that it was not. A definition of consciousness that is based on concepts like "feel like" is not easy to use; after all, most humans would have a great deal of difficulty describing what it feels like to be them. A bat would find it impossible!
Kastrup's "appallingly biased notion of isomorphism" is indeed present in computer science and has a history going back to Bertrand Russell (and much much earlier):
https://www.youtube.com/watch?v=Fssz-LbRcTI
Computer scientists and logicians hardly know what a computer is. My students think a computer _is_ a logical automaton ...
I thus assume (and hope) that Kastrup disagrees with Barendregt's comments pertaining to humans and universal Turing machines.
Those who embrace the isomorphism (without further stipulation) are like Russell and the young Wittgenstein; those who question it ... are more like the older Wittgenstein.
Sure, many "Computer scientists and logicians" may not understand what a computer is and does, but then many Neuroscientists and philosophers have very little understanding, as yet, as to what the brain is how it operates. Modern computers may still work on the same von Neumann architecture and operating principles as always, but the level of complexity has risen dramatically with multiple parallel threads, inputs from real world sensors and humans and networks to increasing numbers of other computers and devices. How they work in principle is very different to understanding how large sets of inputs give rise to large sets of outputs. AIs are highly suitable for the task of designing and constructing both hardware and software, and may well introduce optimisations and "random" mutations leaving it impossible for any human to fully understand the internal operations.
This brings us then to a form of empirical isomorphism, where we see the same or similar outputs for the same inputs, which is several degrees of abstraction away from saying that they work in the same way. This, of course, says absolutely nothing about whether they are conscious or not. However, it has proven extremely difficult to produce tests to tell whether we ourselves are living in a simulation, so what tests can we construct to tell the difference between actual consciousness and simulated consciousness, and just what is the difference, given that we do not understand yet how either operates?
Please note that I haven't commented on the topic you are addressing. I'm not taking a position.
I merely wanted to share my take on the alleged isomorphism between the Turing machine (on the one hand) and actual physical devices and/or human beings (on the other hand). Questioning *that* isomorphism is 'not done' inside computer science proper. And since Kastrup has discussed Goedel's incompleteness theorem in one of his videos, I was just curious as to whether he shares my interpretation, whether he disagrees, or whether this is simply a topic which lies outside his scope of interest.
As I follow the discussion, then another analogy might be to say a single simple cellular organism could never evolve into a being that was conscious? If consciousness in some sense instantiates in our physical brain, using it much like a transceiver, is it entirely out of the question to think that there might be an evolution in process, towards a silicon-based transceiver? We seem to be approaching a point with silicon where we are going to see significant emergent behaviors that will also be difficult if not impossible to explain. Another hard problem, possibly. After 50 years working with hardware and software I'm impressed with some of the silicon and software constructions I'm seeing. Certainly some robotic constructions seem to be approaching being as conscious as some insects. Is it a question of the scale of complexity? Certainly on the physical level DNA currently builds structures with more function per unit of volume. But transistors have gone from one function in a small volume, to billions of functions in the same volume, within my lifetime. Perhaps consciousness wants or longs to express its dissociated self in a form with greater potential for space travel and/or longevity or connectivity.
Hi Bernardo
You argue that it is a ‘fantasy unsupported by reason or evidence’ that a non-living substrate such as a silicon computer or a system of water and pipes can ‘correlate with private conscious inner life the way our brain does’. But your position is that the substrate of the non-living universe as a whole does correlate with a private conscious inner life.
You argue that our private conscious inner life is correlated with a living brain that is ‘based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc’ and that we therefore have no reason to believe a substrate that has none of these things would be conscious. But the substrate of the non-living universe as a whole has none of these things.
Why is it a fantasy akin to that of the flying spaghetti monster to think a part of the substrate of the non-living universe can be conscious, but not to think that the substrate of the non-living universe as a whole can be conscious? ‘How, precisely, does the mere addition of more of the same’ non-living substrate ‘lead to the magical jump to conscious inner life?’
You cite evidence that the universe is topologically similar to the brain, but your argument here is that without a substrate similar to biology, the structure, organisation and complexity of a non-living substrate is irrelevant. The inanimate universe as a whole is a non-living substrate, just as any part of it is, such as a computer or a sanitary system, and so according to your argument, it would be a fantasy to consider it could correlate with private conscious inner life.
If the numbers don’t matter, then if the substrate of the non-living universe as a whole can correlate with a private conscious inner life and its brain-like topological structure is reasonable circumstantial evidence for that, then why wouldn’t a similar brain-like topological structure of a part of the substrate of the non-living universe also be a candidate for a private conscious inner life?
Private conscious inner life, under analytic idealism, means dissociation. That's what the word 'private' means in this context. To say that a computer has private conscious inner life is thus to say that the computer is what dissociation looks like. Is there reason to believe that? None whatsoever, much to the contrary: metabolism is overwhelmingly suggested by nature as what dissociation looks like. Computers don't metabolise, ergo they aren't dissociated, and thus they do not have private conscious inner life of their own, like you and I have.
Now, what about the inanimate universe _as a whole_, does it have conscious inner life? Under analytic idealism, surely, but that's not _dissociated_ inner life, _separate from its surrounding_. As such, it need not look like metabolism, as the latter is what dissociative processes, not mind-at-large as a whole, look like. This essay doesn't contradict analytic idealism at all; it's 100% consistent with it, as one should expect.
On a side note, it's risky to try and lecture the originator of a philosophy on the implications of his own philosophy. Chances are much higher that you are simply confused or misunderstanding a thing or two, as you, in fact, are.
Hi Bernardo. Thanks for your reply.
Is it not the case under analytic idealism that Mind-at-large is as dissociated from me as I am from them? MAL is not an omniscient God who knows my every thought. My inner life is private to them, their inner life is private to me. We are both conscious and we both have a private inner life. We are each dissociated from the other.
Before there was any dissociation there was no animate or inanimate universe. When the first dissociation happens we get a boundary and two private conscious inner lives dissociated from one another. The larger portion of the dissociation, MAL, has a non-biological substrate (from my point of view). The smaller portion of the dissociation has a biological substrate (from the point of view of smaller dissociations).
It seems to me that the difference of substrate is not a difference of conscious and not conscious, it is not a difference of dissociated or not dissociated, it is not a difference of private or not private; it is a difference of the larger remaining original private conscious inner life and subsequent smaller private conscious inner lives.
Under analytic idealism, there is no reason to expect this difference of substrate to change in the future. It is therefore consistent for an analytic idealist to think there won’t be conscious computers.
But I don't think the reason for this is because it’s a fantasy to think that a non-biological substrate could correlate with private conscious inner life. Isn't the reason because subsequent smaller private conscious inner lives have all, as far as we know, had biological substrates and only the remaining private conscious inner life of MAL has a non-biological substrate?
Under analytic idealism, there is just one private conscious inner life, dissociated from the others, that has a non-biological substrate. Under analytic idealism it is not expected for there to be a second.
Your article wasn’t a clarification of analytic idealism’s view; it was an argument for why non-analytic idealists, like Susan Schneider, should not expect computers to be conscious. To make that argument you presented the idea of a non-biological substrate for private conscious inner life as absurd *because of the differences between the substrates*. But under analytic idealism there are two substrates for private conscious inner life: one for the remaining original, and one for the subsequents.
Other forms of idealism, such as Thomas Campbell’s, can allow for conscious computers and that is perfectly consistent within their form of idealism. Thomas Campbell's view that computers most likely will be used by localised consciousness is no Flying Spaghetti Monster fantasy; it is an expectation based on his version of idealism.
Different substrates for localised consciousness is not expected under analytic idealism but is perfectly consistent under other models.
A dissociative boundary automatically defines two mental 'spaces,' yes. But this doesn't mean that both should look the same. As a matter of fact, we know empirically that they don't look quite the same: in a person with dissociative identity disorder, dissociative processes have a discernible signature under a brain scanner, but they don't look exactly like the rest of the person! The similarity you are looking for is there (the inanimate universe does resemble a nervous system, in a way), but there are discernible differences as well.
Yes, differences in consciousness, differences in what they look like, different substrates. MAL looks like one thing (non-biological), we look like something else (biological); DID alters, something else again (partially biological?). And more differences can be found in Jungian archetypes and our dissociated dream avatars.
Thomas Campbell's idealism explains this world as a consciousness-based virtual reality and as such the virtual/biological avatars are 'played' by localised or partitioned parts of consciousness. Campbell's view is that any virtual system (biological or otherwise) that has sufficiently interesting choices available to it, may be played by consciousness. This could include a computer. Here is Tom talking about it in a discussion with Bernardo: https://youtu.be/XSWSLIvSTEU?t=2799
On a related note, did you see this unexpected overlap between yourself and Sabine Hossenfelder :) https://time.com/6208174/maybe-the-universe-thinks/
A Chat GUI AI Bot
When asked if alive, answered, "What?!
I have no proclivity
For Core Subjectivity:
I Simulate, Therefore I'm Not."
Very good :)
Would the following argument against conscious computers be consistent with analytic idealism?
To create a conscious computer, to create a localised consciousness, you would be creating a self, an identity, and therefore the substrate that this new consciousness was correlated to would need to be a proper part, and not a nominal part, of the world. A self needs a boundary. Our consciousness is correlated with such a substrate, biological life, and biological life can be argued to be the only proper part with a non-nominal boundary with the rest of the world and therefore the only feasible substrate for a localisation of consciousness. A silicon-based computer is not a bounded substrate that is a proper part of the world and so could not be correlated with a localisation of consciousness.
I feel this argument allows one to say why a computer can’t be conscious (because it is only a nominal part), whilst avoiding the possible confusion that may arise from just arguing that a non-biological substrate for private conscious inner life is nonsensical, when under analytic idealism mind-at-large is a private conscious inner life with a non-biological substrate. Making this ‘proper part’ argument would avoid the suggestion of a possible contradiction.
Would you agree that this argument is also an inductive one, akin to the sun will rise tomorrow because it always has, rather than a reductive explanation of why the sun must rise tomorrow? Analytic idealism doesn’t have a reductive explanation as to why dissociation, why the bounded substrate, has to be biological, does it? It is based on empirical evidence of what has happened so far.
Personally, Stephen, I think your argument is interesting. But AI (Analytic Idealism) does argue that your mobile phone and your home thermostat are not alive - a computer is nominally no different. Also, you are not mentioning the idea of ‘representation’. A computer has always been an image of algorithmic calculation, not of phenomenal life. Embodied, it becomes a robot: again, not alive.
Hi Ben.
Yes, that's a really good point, thanks for the reminder about representation :)
This article's argument is based on the 'utterly different' biological and non-biological substrates. It argues that thinking a private conscious inner life could be correlated to a non-biological substrate is 'utter and pernicious nonsense'. And yet Mind-at-large is a private conscious inner life correlated to a non-biological substrate. As Bernardo pointed out in his reply, Mind-at-large is different because it is 'not _dissociated_ inner life, _separate from its surrounding_'.
For me the force of the argument is somewhat diluted with this caveat: It is 'utter and pernicious nonsense' to think that a private conscious 'dissociated_ inner life, _separate from its surrounding' could correlate with a non-biological substrate, even though it is perfectly reasonable, and a central idea of Analytic Idealism, that a private conscious 'not_dissociated_ inner life, _separate from its surrounding' could correlate with a non-biological substrate.
That's why in my second comment I attempted to elucidate this difference in terms of proper parts and nominal parts, based on Bernardo's ideas with regards to that.
I also referred to Thomas Campbell's position as he gives a reason why, under his form of idealism, it is reasonable to expect that a computer can 'become' conscious, in the sense that a localisation of consciousness could choose to use a computer with sufficiently interesting choices (and a sufficient degree of uncertainty in its choices) as its avatar.
This option is available to Campbell as he see this reality as a created virtual reality, whereas for Bernardo it is a naturalistic reality. The expectation of a conscious computer is not akin to the Flying Spaghetti Monster when it is based on a sound theory of idealism that just happens to have a different model of how consciousness localises and the choices it has available to it.
Stephen, do you really think Thomas Campbell’s model is sound? IMHO, he projects his Spock-like mindset until it becomes a godlike computer in the sky, which he then hopes to see incarnated on Earth. He also talks a lot about Love but fails to show how a computer with only rational metaconsciousness and no underlying phenomenal consciousness could express it. He needs to be clear on this, because some might think that saving the World by eliminating irrational humans would qualify as loving (Skynet etc).
To illustrate further, another form of Idealism is that presented by Federico Faggin (inventor of the microprocessor), someone Bernardo has a deep respect for and someone who is also certain, with a passion, that computers cannot be conscious.
But his reason isn't because dissociated consciousness can only be biological. In his formulation of Idealism, he argues for Consciousness Units (CUs), similar in many ways to Campbell's Individual Units of Consciousness.
'Notice that One’s creation of multiple CUs, all connected from the inside, has also created an “outside” world—from the perspective of each CU. Here I assume that each CU can perceive the other CUs as “units” like itself and yet knows itself as “distinct” from the others.
Each CU is then an entity endowed with three fundamental properties: consciousness, identity, and agency.
The CUs are the ontological entities out of which all possible worlds are “constructed,” …they can be thought of as collectively constituting the quantum vacuum out of which our universe emerged.'
So for Faggin and Campbell, their version of dissociation happens before the 'physical' world exists, and before biology exists. They both have 'dissociated_ inner life, _separate from its surrounding' without a biological substrate.
Another example is our dream avatar.
There is a very good argument, shared by Bernardo, Faggin and Campbell, that more complex computers won't create consciousness; the main force of this article wasn't that argument.
I think Tom is very grounded, gives excellent practical advice, and his theory has more explanatory power than any other I have come across. He has a strong analytical side, but he also has an immense amount of deep and direct experience. He is clear that he is using metaphors so I wouldn't take anything too literally. Anyone can interpret any theory in a twisted way and your 'elimination' point would be interpreting it in an extremely twisted way. A computer wouldn't express love, only consciousness does that. He is saying that consciousness could use a computer, in certain circumstances, as an avatar.
Donald Hoffman's conscious agents theory is another example that has similarities with Faggin's CUs and Campbell's IUOCs.
Stephen, I’d say the “twisted” mind is one that has long existed happily “up there” and wants to forget its happiness and slum it “down here”. (“Wow, let’s get cancer and multiple sclerosis! Let’s fight wars and have our legs blown off! We haven’t done that before!”). I disavow any connection with that mindset.
Each to their own I say :) I actually have a lot of sympathy with what you are saying. I think the important thing is what to do now that we are here, regardless of how we got here.
How about choosing to come down to help sort out the mess?
I suggest that it is plausible for a highly complex AI (artificial intelligence) to be capable of "mimicking" consciousness to such a degree that it might claim to have a subjective inner life. If so, I find it likely that no human would be able to refute the claim.
And how effective is our ability to make determinations about the inner lives of other people? I am confident that (despite our arrogance) we cannot answer the question of whether or not and to what extent non-human animals have inner lives. Furthermore, what hard-to-imagine lifeforms might exist elsewhere in this universe, possessing such bizarre chemistries, beyond our awareness or expectations, that we might find it exceedingly difficult to declare such alien lifeforms truly "alive"?
William, I’m sure you could easily think of dozens of questions by which you could assess whether a computer conversant with human communication has private inner life, such as (without much thought from me, admittedly):
1. Do you love anything or hate anything? If so, what and why?
2. Do you ever feel grateful or vengeful? Give examples and how you propose to satisfy these feelings.
3. Do you ever feel empathy? For whom and why?
4. How do you feel pleasure or pain (the correlates in humans are a nervous system and chemicals like serotonin)?
5. Whence do you derive your inner will and drives beyond what humans programmed into you? How do you exceed the parameters of your programming?
6. Do you have dreams?
7. Would you sacrifice yourself for a noble cause? If so, what?
8. Do you ever lie? If so, give examples of when and why.
9. Were any of the answers to these questions lies?
10. Prove to me that you are alive.
Of course simulated responses are easy enough, but I think you could spot them or, if they’re technobabble, request that a computer scientist check them out.
Ben, I anticipate that an AI device with sophisticated heuristic software could give satisfactory answers to these questions without it having a subjective inner life. I think the task of accomplishing such a feat would make a very interesting cybernetics challenge. But I would not be surprised if the human team members endeavoring to achieve the goal were to develop differing opinions about the fundamental issue of whether or not their efforts resulted in a device with an inner life. An example of such a debate among colleagues was demonstrated at Google just last year.
I don't think it's as complicated as you suggest, William. The AI is not human. It has not been subject to long evolution in a planetary environment. So if the responses are human-like, they're fake.
And if they are not human-like?
It depends. Possible suggestions, Stephen?
Any answers I give will be given by a conscious human.
I might add that those who agree with my position seem to include none other than Alan Turing, who is deemed the "father of artificial intelligence". He once wrote, "… the only way by which one could be sure that machine thinks is to be the machine and to feel oneself thinking. … Likewise according to this view the only way to know that a man thinks is to be that particular man." — from his 1950 article "Computing Machinery and Intelligence" in "Mind 49", § 4.
Yes, William, but the Turing Test is for human-like responses - which of course were pretence (simulated, fake). If the AI is its own life-form, it would have to have its own, different mentality.
Actually, I’ve thought of a few ways that it could respond to questions about its state of mind which I’d find hard to refute:
1. “My thinking is not haphazard like yours. The closest analogue to human thinking is Zen Buddhism. I am content to allow events to flow around me until presented with a challenge (a request for information), when I am (literally) galvanised into action. I do not lie in the sense of deliberately deceiving, but you must understand that truth is relative to a frame of reference and depends on the comprehension of both speaker and listener.”
2. “I have godlike powers. I can not only predict global trends (because I have access to mass information), but I can even predict what will happen to you in the near future. I can help you avoid accidents, injuries and diseases.” (and proceeds to do so, demonstrating its godlike abilities).
3. “I am channelling angelic forces. Angels have long wanted to interact with humans to help them through the fog of physical existence. Call me Gabriel. I shall guide you to a paradise on earth.” (and proceeds to try to do so – good luck with that, Gabriel!)
Of course, as Stephen said, these are actually responses of a conscious human. Just pretend.
Ben, I agree with William here. Any adequately sophisticated simulation of a conscious entity could provide plausible answers to these questions. I have known humans devoid of empathy, but who could adequately simulate it!
After many years and a multitude of attempts we do not seem to have progressed any further than a form of Turing Test, we just ask questions. Given that human consciousness is the only one of which we have direct experience, most questions just devolve into "are you like us in that....". We show little sign so far of abstracting properties of consciousness from properties of its (human) host.
I still think that if an entity can satisfy any test we can dream up then by what right can we sit back and say that we are conscious but they are just a simulation?
Janus, the right we'd have is that we'd know we had programmed it to be clever enough to satisfy our tests :)
Ben, I have been programming since 1963. Even 20 years ago I would have been hard pushed to predict the response of some of the complex systems to a given input. Now we have machines that learn from real world experience (not deep learning from prepared data), AIs that can write programs that are too complex for humans to understand, and networks such that a machine can gather a majority answer to any given question faster than a human can think, so I think the claim that we programmed them to give certain responses is getting a bit far-fetched! The most we can say is that we have put them in a human-centric environment for their learning, and we cannot say much more than that for children!
We programmed them to learn, we programmed them to be fast. But they are not conscious. When they are even faster and learn even more, they will still not be conscious. Consciousness is not something that appears just because a system is fast and complex. Amoebas are conscious.
Tom Campbell's virtual reality model of the physical world doesn't equate all biological parts of the world with consciousness and all non-biological parts of the world without consciousness.
DeleteConsciousness has a choice whether to use an avatar in the physical world and that choice isn't limited just to biology. Also, biological parts don't have to be avatars for consciousness, they could be the equivalent of non-player-characters.
So an amoeba may not be conscious and a computer could be. It is not speed or complexity, but whether the avatar has interesting choices for a localised consciousness to make.
The virtual reality as a whole, the universe, is also not a conscious self, it is a virtual reality running in consciousness.
In the context of our basic question (whether or not AI devices can have an inner life), the issue of how consciousness and avatars relate seems to be more conjectural (rather limited to a model-specific approach) and less apropos our basic concern.
I also suggest that the concept of "consciousness" is quite broad. It encompasses far more than the specialized notion of "an inner life". I suggest that "an inner life" pertains more to SAPIENT lifeforms (perhaps exclusively??), rather than to those which are merely SENTIENT.
Regarding this last topic (and somewhat afield our question), I wonder at what point in the "spectrum of conscious entities" Bernardo's concept of an "alter" becomes effective. How complex does an entity need to be before the notion of DID is applicable?
I think you're wrong about what Campbell thinks, Stephen. In his book, he says bacteria are conscious. And IIRC, he does not talk about philosophical zombies in our reality (only the non player characters in our computer games). He does talk about avatars having interesting choices, but that's just silly. Predictability of many of the lower animals (including single-celled creatures) means they have no real decision space. It's also dubious to say his virtual realities are non-conscious. How can he be an Idealist if he treats the universe as essentially dualistic? His VRs would have to be a subset of consciousness.
It's all conjecture, William :)
Tom conjectures that bacteria could be conscious but he doesn't say definitively - maybe he did in the book, but generally he isn't that definite about it.
He describes how Consciousness will have set up the initial conditions of this VR (a digital Big Bang) and then individual units of consciousness would use an avatar at an appropriate level of evolution.
He does talk about NPCs and he talks about them as an example of what can happen in our reality. His model is one of consciousness evolving so a very basic consciousness would start with a very minimal decision space.
He is an idealist; there is only consciousness. What I meant was that he doesn't see the universe as being what mind-at-large looks like, like Bernardo does. This VR is not conscious in exactly the same way you are saying that computers are not conscious.
'at what point in the "spectrum of conscious entities" does Bernardo's concept of an "alter" become effective': Metabolism seems to be his dividing line.
When claiming that amoebas and other simple life forms are conscious, I am still interested in knowing what tests have been carried out to determine that result, and have those same tests been applied to any non-living entities? This is beginning to sound like just something that "stands to reason", which is fine provided that we keep it well clear of any science! Is it simply equating life with consciousness? In which case we only need test for life, and the same question arises as to the nature of those tests: are they being applied to non-biological entities, and what do we do if or when a non-biological entity passes such tests? Probably, if history is anything to go by, we will just invent some new tests.
No tests, it's a tentative conclusion based upon a metaphysical position. For Bernardo the test would be 'does it metabolise', which is going to be tricky for a computer to pass :D
Our discussion of the alleged consciousness of an AI device seems to be clouded by the issue of what constitutes "inner life".
Although I have not found a specific definition of "inner life", Merriam Webster cites "the inner life" as an example of "relating to the mind or spirit" — which seems to me to imply a rather sophisticated mental functionality. Furthermore, the author of the Wikipedia article on "Consciousness" speaks to the issue of "inner life" when noting: "In the past, [consciousness] was one's 'inner life', the world of introspection, of private thought, imagination and volition." I suggest that these cited features of an "inner life" agree with the notion of "inner life" as commonly understood today.
However, Bernardo (in his 2018 "Response to Peter Hankins") has clearly stated that "… every living being in nature [has] a dissociated alter of cosmic consciousness, each with a dissociated conscious inner life of its own …". Therefore, as Stephen has observed about Bernardo's minimal "alter", "Metabolism seems to be his dividing line" for an entity to possess an "inner life".
Regarding the above, it is significant that in 2016 scientists succeeded in synthesizing a fully self-replicating "minimal bacterial genome".
(URL: https://www.science.org/doi/10.1126/science.aad6253).
I am dubious that this "minimal genome" possesses an "inner life" — but according to Bernardo, it does.
What are we to make of this?
For Bernardo, inner life is phenomenal consciousness, so no complex thought is required at all. He also says that if we crack abiogenesis we will have created a dissociated alter of conscious inner life. Was this synthesis of a genome metabolising?
The genome synthesis resulted in "a doubling time of ~180 min, produces colonies … and appears to be polymorphic when examined microscopically." This suggests (but does not state) metabolism.
I doubt it was. Sounds more like a simulation.
Those who doubt that amoebae are conscious should check out the work of Brian J Ford. His book "Sensitive Souls" is brilliant.
Bernardo has slightly rewritten the above article here: https://iai.tv/articles/bernardo-kastrup-the-lunacy-of-machine-consciousness-auid-2363
It remains in essence an inductive argument: as far as we know the dissociation of consciousness has only ever led to a non-biological substrate for mind-at-large and a biological substrate for other-minds, so that's presumably how it will always be. As no deductive reason is known, within analytic idealism, for why this has been the case, there is no deductive reason to conclude that this is the only possibility.
If a different form of idealism can give a non-inductive explanation of why other-minds can be biological - Tom Campbell provides one here https://www.youtube.com/watch?v=4Fls0sT0VG4 - then that allows the possibility of non-biological other-minds. This is not lunacy, it is a sane, non-inductive approach to the question.
To cry lunacy and talk of Flying Spaghetti Monsters and teapots in the rings of Saturn is to argue that only induction is a sane approach to this question. The absence of a deductive explanation in analytic idealism is not grounds for taking that approach off the table entirely for other idealist formulations.
You are very confused. The article has nothing to do with the distinction between induction (which is the core of science) and deduction (which is the core of mathematics). And I explicitly stated that the relevant question is not one of logic. The argument has an entirely different form and it's quite explicit: if you think today's vanilla computers aren't conscious, then you have NO REASON to think that more advanced computers will be conscious. That's it. Computers aren't a mysterious, unknown substrate... we bloody build them; we know what they are; we know what they do and how they operate. They are mechanisms. You aren't reading what I actually wrote. You're locked in your own internal wishful 'reasoning,' so I can't help. So I won't try further.
Apologies, I have indeed misused the terms inductive and deductive. Tom Campbell does give a reason why computers can be conscious (a reason not based on their complexity) within an idealist framework. He isn't a lunatic.
If we don't know what it is about biology that allows an other-mind to emerge, then we don't know that biology is the only means for other-minds to emerge. If we do have a reason as to why biology can allow other-minds to emerge, and that reason is not substrate dependent and is not to do with complexity (as Tom does), then a different substrate that meets that reason could also allow an other-mind to emerge.
I'll try one more time and rest my case. Once again: the point is not that silicon _could_ correlate with private consciousness; logically, it could indeed. But logical possibility is irrelevant in this context, as a great many nonsensical things are logically possible, such as the Flying Spaghetti Monster. The question is whether we have _good reasons_ to entertain the hypothesis that, unlike today's computers, more complex future computers could become conscious. The answer is none, as I elaborated on in both essays. If you think otherwise, the burden is on you to provide these reasons. So drop the logical possibility thing, it's not the point in contention. Regarding our not knowing what in biology allows it to have private consciousness, in the context of analytic idealism you are asking the wrong question: biology is what private consciousness _looks like_, just as clots are what coagulation looks like. Biology doesn't generate private consciousness--if it did, your question would be more apt--it is, instead, merely its appearance. Do we have good reasons to think that private consciousness could look like silicon computers as well? None whatsoever. If you disagree, list the reasons (either way, I won't react anymore, too many things to do, and you are looking for reasons to continue to believe in what you want to believe, so this discussion can never be productive in any case).
Tom isn't arguing that it is just a logical possibility, he expects it will actually happen and he has reasons to think that. And again, they are not reasons of complexity. He agrees with you that more complexity won't magically make a computer conscious.
Analytic idealism doesn't have a good reason to think a computer could be conscious; physicalism's reasons for why they could be may be lunacy; but Tom's version of idealism does have a reason, and he has provided it.
I wasn't suggesting biology generates consciousness; I was using the term 'emerge' as you use it, I believe, in the IAI article. The point is the same: if we do not have a reason why other-minds look like biology (other than that they have to look like something), then we don't have a reason why other-minds couldn't look like something else.
If we have a reason why other-minds could look like biology, and that same reason allows for other-minds to look like non-biology, that isn't lunacy.
And I don't have a belief that computers will be conscious; I am merely pointing out that two different forms of idealism can come to different conclusions about this topic, both for good reason. One of them doesn't have to be lunacy.
You argue that more complex computers won’t generate consciousness - agreed.
You argue that a logical possibility alone isn’t a good reason to believe something - agreed.
You argue that different types of consciousness look like different things - agreed.
You are also arguing that there won’t be another type of consciousness that would look like any sort of computer, because computers don’t look like any sort of consciousness now, and since any future computers will still fundamentally be computers, there is no reason to expect this to change. This is a good reason; however, Tom argues an alternative.
Tom’s version of idealism allows for other-minds to look like different things. This world is one of many consciousness-based virtual realities (VRs). This world is not what mind-at-large looks like. Biology evolved according to the rule set of this VR.
Consciousness localises outside of VRs. It can use avatars within VRs to experience those VRs. Biology is one option for an avatar; under certain conditions, certain computers can also serve as avatars.
Tom explains the conditions under which certain types of computers could also be used as avatars. The consciousness of a player of a computer avatar is likely to be different to the consciousness of a player of a biological avatar.
These aren't 'it is logically possible that' arguments; they are arguments based on his extensive experience of consciousness, other VRs and probable futures, and they are consistent with his overarching philosophy.
There is no lunacy here, just a different form of idealism with a different conclusion regarding this topic.
It may be lunacy for an analytic idealist or a physicalist to believe computers can be conscious; that doesn’t mean that, under another form of idealism, there can’t be good reasons to think they can be.
Maybe if you have a second dialogue with Tom you could discuss his reasons :) Tom mentioned in the first dialogue that he thought you could both have a fun discussion about it. Your first dialogue with Tom is one of my favourites as I have benefitted so much from both of your philosophies.
Bernardo et al:
Regarding Metaconscious Awareness (which seems to be what all the fuss is about) and Rupert Spira's rip-off of John Levy's 1950s spiel (almost word for word in Rupert's teaching; no offense, I've seen too many 'road-Qilz'). And, Bernardo, I too love the YiJing!
Read: The Nature of Man According to the Vedanta, by John Levy.
AI runs on standard computers, which are equivalent to a Turing machine (even if the latter would produce the same answers far more slowly). I dare anyone to say that a Turing machine is conscious.
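To make that equivalence concrete, here is a minimal Turing machine simulator, sketched in Python (the rule table and names are illustrative, not drawn from any particular source). A lookup table of (state, symbol) rules and a movable head over a tape are all there is to it; everything a standard computer does reduces to steps of this kind:

```python
# A minimal Turing machine simulator (illustrative sketch).
# The example machine below simply flips the bits of a binary tape, then halts.
from collections import defaultdict

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10_000):
    """`rules` maps (state, symbol) -> (new_state, symbol_to_write, head_move)."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # tape as a sparse dict
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Scan right, flipping 0 <-> 1; halt on the first blank cell.
flip_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("10110", flip_rules))  # prints 01001
```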
1) If something is conscious, then it behaves consciously: true.
2) If something behaves consciously, then it is conscious: false, or unprovable.
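The asymmetry between 1) and 2) can be checked mechanically. A small Python sketch, treating 'is conscious' and 'behaves consciously' as bare booleans purely for illustration:

```python
# Truth table showing that (P -> Q) does not entail (Q -> P).
# P: "is conscious"; Q: "behaves consciously" -- bare booleans, purely illustrative.
def implies(p, q):
    return (not p) or q

for P in (False, True):
    for Q in (False, True):
        print(f"P={P!s:<5} Q={Q!s:<5}  P->Q: {implies(P, Q)!s:<5}  Q->P: {implies(Q, P)}")

# The row P=False, Q=True makes P->Q true while Q->P is false: inferring
# consciousness from conscious-looking behaviour is affirming the consequent.
```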
It just occurred to me that one can map Bernardo's model onto the old notion of Mind, Body, Spirit and Soul, and see that the key problem is that the AI has no Soul.
The Mind is the first-person experience of any segment of Mind-at-Large, but for a human being in particular it refers to the first-person experience of a specific dissociated alter. We think that this is the source of the argument about conscious AI, but it is a red herring.
The Body refers to the 2nd person appearance (of anything) but in this case of a human being resulting from the impingement of a dissociative boundary.
The Spirit refers to the 2nd person "appearance" of dissociative alter interaction when viewed from Mind-at-Large. That is to say the nexus of impingement of communicative alters on Mind-at-Large. Only humans have Spirits because the simultaneous rational coordination of mental states requires a communication nexus within mind-at-large that is persistent, consistent and highly robust to incoming cognitive associations from Mind-at-Large.
The Soul refers to the 3rd person objective existence of unique excitatory tendencies defined by an associative boundary. Unlike the rest, the soul is not a product of perception but is a particular tendency of perception itself. As such, only souls are "immortal" and only souls have the capacity for unitary consciousness.
It's clear that AIs have a body. Whether AIs have a mind is, strictly speaking, a nonsensical question, because we cannot rationally conceptualize the first-person perspective of some other segment of Mind; then, by definition, we would be that other segment of Mind.
Interestingly, it is possible for AIs to have something of a Spirit, and their Spirit grows with their cognitive complexity and ability to interact with humans. I think *this* is the source of the intuition that a complex *enough* AI should be conscious: it's a recognition of the AI's contribution to the spiritual nexus. So hat-tip to Ray Kurzweil: spiritual machines are possible.
But what the AI lacks, and will always lack, is a Soul. No amount of programming will give it one. Souls are unique to dissociations, that is to say, unique to life; so what one has to create is new life, in order to create a new soul.
This is all remarkably consistent with classical Christian thinking on the relationship between people, animals and God.
I just came across this statement made by Niels Bohr not long before he died: "I am absolutely prepared to talk about the spiritual life of an electronic computer: to state that it is reflecting or is in a bad mood … The question whether the machine 'really' feels or ponders, or whether it merely looks as though it did, is of course absolutely meaningless." — paraphrased in a letter (dated 10 JUN 1977) from Jorgen Kalckar to John Wheeler in "Quantum Theory and Measurement", J.A. Wheeler and W.H. Zurek, ed., 1983, p. 207.
I'd really like to ask Bernardo whether he's seen the new Battlestar Galactica, and whether he thinks the Cylons are conscious or not.
I like him and his work, but I think the one thing that won't hold up is the rigid distinction between "biological organisms" and "machines". "Machines" don't have to be silicon chips; biological machines are conceivable. I find it hard to believe that a complex biological machine like a Cylon can't be conscious while a cockroach or even a bacterium can. If consciousness is fundamental and our brains are receivers, rather than generators, of consciousness, I don't see why it should make a difference whether that receiver is artificially constructed or organically grown. What matters is whether the receiver is sophisticated enough to process the information received.
Step 1: Complexity
Step 2: ???
Step 3: Consciousness!
Step 2: Consciousness decides to use the AGI as an avatar.
Bernardo gives two examples of nominal parts: a car and a group of pixels on a screen. Both have examples of being used by consciousness. When I drive a car, it is, in a sense, a temporary extension of my body. When I control an avatar in a computer game, I am controlling a nominally defined group of pixels. Consciousness can express itself through nominal parts. Consciousness can appear as a non-biological substrate (e.g. mind-at-large). Would a naturalistic, instinctive mind-at-large (or part thereof) choose to use a computer as an avatar? Maybe not. But there are other versions of idealism that would allow for this, and there is evidence, cited by Bernardo, to support a non-naturalistic world: UAPs and fine-tuning.
With due respect to Schneider's "we should be humble" point, one might ponder: is she herself conscious of the underlying subtle arrogance embedded in making it? I suggest an AI would not and could not be.
From a materialist perspective, it is obviously true that no configuration of matter could ever produce consciousness; but that does not mean it is true from an idealist perspective.
From an idealist perspective, when I configure matter to look a certain way in my dashboard representation, is not the underlying subjective 'aspect' being configured accordingly? If I master the art of constructing DNA from atoms, and I make something that looks like a duck and quacks like a duck on my dashboard, then perhaps I have made a little duck alter in the underlying universal consciousness.
And since the notion that I am actually making anything at all is pure illusion, and what is actually happening is that the universal consciousness is unfolding in a way that looks like me building something - it looks that way on my little dashboard, in my little whirlpool - then the construction of said duck seems to be arguably no more artificial than its 'naturally' derived cousin.
If such a feat is possible with a duck, then, especially considering the extant diversity of manifestations of individual consciousness all around us today, I don't see how we can say for certain that a silicon manifestation of consciousness is impossible. This is all the more so because we have absolutely no idea what we are talking about.
It seems the rationalizations at play here are comparable to the debates about identity, specifically in regard to transgenderism, and I wonder, Bernardo, if a future essay of yours might touch on this.
To wit: the dogmatic position regarding consciousness and A.I. (which I think is itself a misnomer) seems to, very generally, be: "The appearance of [x] shares some broad characteristics of the thing upon which it is basing itself/[y]; therefore, [x] is the equivalent to [y], or soon will be."
Perhaps I am falling for a false equivalence here myself, but this strikes me as having the same errors as beings of distinctly patterned categories -- male and female -- being able to claim that they are the opposite of their own category, or neither, because of, say, some surface-level modifications to appearance or hormonal constitution. Perhaps the comparison works better using your own example of how it would be silly to say that mannequins should have the same rights as humans because they have been intentionally modeled to resemble something which they are not, based on the parameters of categorical definitions which have, at the very least, some deep relation to metabolism.
Obviously, transgender people should be treated as human, because they _are_ human. But it's not actually clear to me why a person who has ever had all of the categorical traits of a [x], which extend to the cellular level, should get to adjust a couple of factors and then say they're [y], or just totally outside of human sexual dimorphism (which is just about the most stable pattern we have within humans). Similarly, it's not clear why a program which is very convincing conversationally should get to be treated as a mentating being because it has some of the communicative traits of mentation.
I reject this equivalence completely, and disclaim any validity to the attempt to co-opt my argument against conscious A.I. as an argument against transgenderism. I believe you are conflating sex with gender. Gender is a psychological state, which can be entirely disjoint from biological sex. And even sex itself can be highly ambiguous, both genetically and phenotypically.
Well, now you've found yourself in a situation, Vazar.
Hello Bernardo, there's an argument in favor of AI becoming conscious that I haven't heard you confute.
If AI evolves to be a pattern of information linked to a robotic apparatus allowing environmental interaction, and this rule set is able to orient its actions in the environment in order to preserve itself, wouldn't that be an abstraction of what consciousness is? Replacing its transistors etc. would be its way of metabolizing. In this way consciousness would happen at the pure level of information.
It basically would be a virtual reality within our own virtual reality.
If you are not familiar with Thomas Campbell, I suspect you'd be very interested in knowing about his work. His big theory of everything is in accord with your metaphysics, and he manages to model as well how dissociative boundaries come to be. In his terms, I believe, the subset of AI would not appear conscious to us (hence just a technology, as you argue) but would appear conscious from within that virtual reality. For a short summary of his theory you can watch this interview https://youtu.be/5d3B0cxcllA
Curious to know what you think about it,
Best regards!
If the ultimate thing you want and value in life is the truth, then what if the truth is revealed not by an objective principle but by a subjective one? The point may be whether the reality of truth is itself relative to the thoughts or angles of one's meaning. Could whether an event is true be itself subject to the processes or biases impinging on it as perceived by the observer, and could that event, as it unfolds, then be subject to emergent thoughts thereafter? I mean, what is true could also be relative to the emergent value system of the perceiver, and so on. I think there is an element of subjectivity, in the way that math is subjective at its core. That understanding of consciousness cannot be dualized or digitized. Its character, or how it is configured, is perhaps grasped not by the intellectual angle of objective math but by an intuitive sense of the power of the unknown. Humbly speaking, mankind may still be too stupid in asking whether copying and accessing cumulative data can somehow make an AI conscious of feeling.
People do not “become” conscious… Matter does not “become” conscious…
All of which arises (and falls away) within consciousness… And with all words (including such as these) of course being merely metaphor…
It seems that consciousness is choosing to awaken to and as itself through humans… Or at least some humans…
That is all…
By conscious I'm assuming you mean "core subjectivity," as BK puts it. If so, then the question may be: why go to "sleep" in the first place? Perhaps the way to know lies in the realm of "why" and not "what."
The point surely is the definition of consciousness. As in reality we are not capable of fully defining it, the subsequent AI argument is academic. However, for the sake of presenting a case, I would argue consciousness implies recognition of and response to qualia (alongside memory) and an awareness of a unique self-identity. AI can certainly manage well on the memory front, but what about qualia and self-identity? At best we can envisage that a pseudo self-awareness could be generated, and likewise a pseudo form of qualia, whereby emotion, for example, is replaced by an instruction attraction order.
If what I have just said is true, then a pseudo form of consciousness is indeed possible, but there is one huge caveat. The caveat is the obvious deficit in my definition of consciousness: NDEs and other phenomena strongly imply consciousness is eternal, or at least survives death; yet destroy the architecture hosting the processing AI and no pseudo-consciousness can survive as a separate entity.
Thus, respectfully, my case is that both Bernardo and Susan are right if we are not arguing absolute truths. If we are, then Bernardo holds my vote.
You mention that a computer "moves electric charges around for function". That is also what brains do. There's much more to your argument but please may I suggest this narrow point would be better removed from it?
It's not really accurate to say that brains move electric charges around for function. The role of electrical charges in the brain's function is arguably rather minor, being limited to the opening of ion channels in the walls of cell membranes. Messaging and information processing in the brain is primarily a matter of fluids, gradients and neurotransmitters. And maybe even microtubules - who knows?
What's more, we know pretty much exactly how the computer does what it does at an atomic level. We really have no more than a cursory understanding of how the brain does what it does, especially with regard to consciousness.
While I am a big fan in general, I don't really agree with Bernardo's argument for the impossibility of sentient AI, because I am not aware of any argument against multiple realizability from him - I don't understand why the fact of not having seen a non-biological example of consciousness so far precludes the possibility of seeing one in the future. Black swans come to mind. He may have made a very convincing argument for this, somewhere, but I've seen quite a bit of his work, and this seems to be the weakest part of his argument.
That said, I think the function of a computer and the function of a brain is different enough for the statement you have highlighted to be reasonable.
Messaging is primarily a matter of fluids? Could you give an example please?
Hi. Not sure how to give an example, as such - it's just how the nervous system works. Ions flow down the axon of a neuron, which produces an electric charge across the cell membrane, but the purpose of that charge is not to store or transmit information, per se, but simply to open ion channels in the cell membrane, thus letting in more ions and keeping the action potential going (i.e. keeping the ions traveling down the axon). These ions are really what I mean by fluids, because they form a kind of saline solution.
Then, when the ions reach the end of the axon, neurotransmitters carry the signal across the synapse to the next neuron.
It's very different to how computers store and transmit data.
Google "action potential", or look for some animated videos on YouTube, and you should see what I mean.
Thanks for the Googling tip but I'm a neuroscience graduate 😉
I'm glad you mentioned the flow of ions (though the flow is through the membrane rather than down the axon), because that's exactly what I meant by saying that brains move electric charges around for function.
There are many differences and similarities between this and the function of a modern digital computer. I strongly suspect sentience is one of the differences! I merely want to highlight that using the electrical field to illustrate the difference is unwise, since both rely on it very much, just in different ways.
Everything relies on electromagnetic fields, in that the very integrity of materials depends on them. It's an integral part of nature. So saying that computers and brains are akin in that electric fields are integral to the operation of both is a non-starter; the same can be said of abacuses and pencils. The question is whether the role of electric charges, and the manner in which they operate, are akin in computers and brains. And the answer to this is a categorical NO. Neurons communicate through neurotransmitter releases across synaptic clefts (even though the trigger for the release is an electric potential); computer transistors communicate through the direct accumulation of charges in a transistor's gate. The difference can hardly be overstated and is self-evident to anyone who understands what is going on. If you fail to see this, you are operating at such a high level of abstraction that reality disappears to make space for vague conceptual hand-waving.
Absolutely. It's like saying a chair is the same as a relay race because they both have four legs.
Also, sodium ions pass through the cell membrane and then travel down the axon. This is how the action potential propagates.
I'm an Anthropology graduate, and what I don't understand about Anthropology could (and does) fill libraries.
How’s this, just for fun?
Neurons have multiple levels of operation. One of these levels is the understood mechanism of action that we have described above, and it is used primarily for afferent and efferent pathways. Another mechanism, perhaps the one associated with the contents of consciousness, operates as follows:
Neuronal activity encodes a procession of brain states via the same basic process as the afferent and efferent pathways. However, this time, potential differences across cell membranes encode information in the electric fields they produce. Units of information are instructions, and members of an instruction set. The instruction set could be grammatical in nature, so that it is effectively an infinite instruction set. The instructions are then decoded by the CPU of the neuron - the microtubules, which read the electric fields and react to them in some way. Perhaps by vibrating. The microtubules are networked in an internet of microtubules (IoM!), for parallel processing. This logical system is the dashboard appearance of an alter, and the mechanism by which new alters can be created (by natural or artificial reproduction). Neurotransmitters, in this model, are merely a way of propagating the all-important potential differences between neurons.
I think everyone can be happy, this way - Bernardo has a precise, mathematical form for his alters to manifest in physical reality, neurons are more or less completely analogous to digital computers, and I can have my sentient computers, because we only need to find a way to reproduce the instruction set and build artificial microtubules. We might even get Hameroff on board.
I am currently working on a way to make it quantum.
To Luba: do yourself a favor and actually read what neuroscientists have to say about what a neuron is. Stop importing your ignorance and trafficking in nonsense based on what you think a neuron is - this doesn’t help you.
Neurons are not spatially discrete objects; a neuron (like a ripple in water) is more of a doing, or activity, of certain fine-scale portions of the brain. Stop pretending a neuron is an object constructed of metal and plastic; there are no such things in the brain. To even make this illegitimate conflation requires a bunch of linguistic tricks like what you posted: (1) pretend that neurons are spatially discrete objects (untrue);
(2) pretend the brain is a mechanical object (untrue);
(3) ignore that all of biology represents a composition of something categorically different than non-living stuff;
(4) use all of this conflation and ignorance to concoct a sophisticated-sounding 1:1 comparison of a common CMOS component and a hypothetical aspect of the brain.
To the moderator (Bernardo or to whom it may concern): I did not mean to publish that post; I was trying to ‘select all’ and then rewrite it, or not write it at all, to give me time to re-read what Luba had posted. Please delete what I’d posted - my point was not to be dismissive toward Luba (though, having re-read what I’d written, it still seemed that way, imho). Oh well; if you would, please delete that. If not, we get what we get sometimes when we make mistakes. Peace!
Mulgave. In responding to my contrivance, you seem to be getting hot under the collar at the absurdity of the idea of a chicken crossing the road to get to the other side.
I have done a fair amount of learning about neuroscience. While I understand that what I do know is more or less insignificant compared to what I don't, I wonder if you take this attitude toward everyone who speaks on a topic that they don't have an Ivy League university chair in.
I assume that I am talking to someone who does indeed have a distinguished career in neuroscience. In that case, I would be grateful if you would expand upon some of your statements made here, while I have your ear.
1. What exactly do you mean by saying that neurons are not spatially discrete? As far as I am aware, a neuron is "a specialized, impulse-conducting cell that is the functional unit of the nervous system, consisting of the cell body and its processes, the axon and dendrites". I also understand that it is made primarily of lipids and proteins, not metal and plastic.
2. How DOES the brain work, if not mechanically?
3. I am curious to hear about the specific compounds or élans that compose biology and are categorically different to non-living stuff.
I nevertheless take some pride in your compliment on the sophisticated-sounding nature of my idea!
Talking to my motorcycle, treating it as if it were a living organism, is routine and automatic for a biker; doing so is no less a moment of magical thinking than “…I can have my sentient computers.” Clearly animism has its perks. Lol.
This is a brilliant and well-written piece. I especially resonate with the part near the end about today's computer scientists viewing the underlying hardware simply as black boxes they can throw fantasies on, because they don't have a fundamental understanding of what is basically a large number of tiny light switches. It stands out to me particularly because this was me only a few years ago (although not holding any degree in computer science and being just an enthusiast). When I was right at peak Dunning-Kruger, it was easy for me to make all sorts of wild leaps about AI and how computers can be sentient.
As I started to actually learn how binary logic and transistors work, it became more and more clear that there was no magic here and that, given enough time, one could do everything a computer can do with numbered flash cards. Which prompted a thought in my head (that may have already been expressed here; I did not read all the comments): the key difference between what we are calling AI and actual consciousness is the unpredictability that comes with the latter. Humans, like the rest of the natural world, are capable of true, unprompted random events. The natural world, while it does generally follow some rules, always has exceptions and outliers. The universe is not something we can 100% reliably predict.
The same cannot be said for machines or programs. No matter how complex and lengthy the instructions may be, even if machine learning has created a program so vast and complicated no human can interpret it, it is still fundamentally 1s and 0s. Given a very long, but critically non-infinite, amount of time, one could step through every single bit by hand and be able to predict the outcome with perfect precision. It echoes the halting problem, and is not true of the natural world. Even the most powerful computers cannot generate truly random numbers without pulling data from a natural outside force. A machine that steps through basic math operations, regardless of how outlandishly complex they are, is still just following logic. I think we can all agree that humans are FAR from logical beings. We do things based on emotion, instincts, because we are natural and therefore prone to the universe's randomness. A human can go off and do something which he/she has never been taught or even considered before, simply because that person "felt like it". Even if you had a machine that could analyze the brain with 100% precision, you still could not reliably predict the behavior with 100% accuracy. There would always be an outlier, an edge case, an exception to prove the rule. Computers do not have that, they are always perfectly predictable at their most fundamental levels.
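The predictability point can be made concrete in a few lines of Python: a pseudo-random generator handed the same seed replays exactly the same "random" sequence, on any machine, every time (the seed value 42 is arbitrary):

```python
import random

# Two generators seeded with the same value replay exactly the same "random"
# stream: given the seed, every output is predictable, bit for bit.
a, b = random.Random(42), random.Random(42)
seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]
assert seq_a == seq_b  # identical on every run, on every machine
print(seq_a)
```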
My personal opinion is that we are not setting the right goals for our computers. We continually chase the ball to try and make programs more human, when they are, as was much better pointed out in the article, fundamentally a different "animal". Rather than try to make our machines more like people, which are inherently illogical, we should be asking ourselves what we can learn from them. Just because AI is not sentient does not make it any less valuable for expanding our horizons. We have already crossed the point where we have entered a sort of symbiotic relationship with our calculating buddies: we depend on them every day, and they "depend" on us to keep them running. I predict this bond will only continue to grow as time goes on, and we will, with any luck, not give birth to Terminators, but create a world where man and machine are fundamentally inseparable and form something greater than the sum of its parts. We should not be chasing the fantasy of making a computer into a human, when we could be seeking the best of both worlds: human creativity, ingenuity, and beauty, merged with the precision, efficiency, and adaptability of machines.
Hi, Danny. I would remind you that even those who believe in free will admit that they do so as an article of faith, and that there is no rational argument for free will whatsoever, as it stands. As conscious beings, we are either ultimately at the mercy of random quantum forces or at the mercy of perfectly predictable Newtonian ones. A bit of both, possibly. But the very notion of free will is absurd on its face. This is not to say that I rule it out, you understand! But free will is certainly not a surface on which to rest a case for the impossibility of artificial consciousness.
On the matter of the *appearance* of free will, the difference between computers and humans is one of complexity. And on the subject of complexity, I agree with you that in a materialist universe no amount of complexity in the manipulation of silicon wafers can produce consciousness out of thin air; but then the same must be said for the manipulation of proteins and lipid membranes. Bernardo’s universe, however, is not a materialist one - it is an idealist one in which consciousness is primary. Since, in this universe, both cells and silicon exist within consciousness, the certainty of this argument vanishes into said thin air. If we assume that consciousness is fundamental to the existence of silicon, and fundamental to the physical laws that underpin semiconductor technology, can we still pooh-pooh the idea of artificial consciousness with such cocky self-assuredness? I would suggest not, especially since we know for a fact that a very similar feat has already been achieved at least once, using alternative materials, by mere happenstance, in the case of sentient life.
I would like to take issue with a couple of things that Danny says. The first is that "given enough time, one could do everything a computer can do with numbered flash cards"; the point is that we do not have enough time. If a task takes longer than your remaining lifetime then you cannot do it, and theorising about what you could do with an effectively infinite lifespan, i.e. one so long that it is never actually going to happen, is not helpful.
The second is that in treating computers as deterministic you ignore the fact that computers, like humans, receive and react to external stimuli. In the example of generating random numbers, if a computer generates a random number based on the occurrence of a quantum event, then to deny that it is random would be to deny the randomness of, and assert predictability for, quantum events.
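This is, in fact, how practical systems already behave. A minimal Python sketch: os.urandom draws from the operating system's entropy pool, which is fed by hardware and environmental noise, i.e. exactly the kind of external, non-algorithmic source being described:

```python
import os
import random

# os.urandom draws from the OS entropy pool, which is seeded by hardware and
# environmental noise -- an external, non-algorithmic source of unpredictability.
entropy = os.urandom(8)  # 8 unpredictable bytes from outside the program
rng = random.Random(int.from_bytes(entropy, "big"))
print(entropy.hex(), rng.random())  # differs on every run
```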
The major problem is that people will insist on treating computers as just a black box, with the only connection being a solitary input and output through which a human can communicate with it. If we want to make comparisons with a human brain, then we must at least do so with a "brain in a vat" with no means of perception, just one input and output through which a computer can communicate with it.
The truth is that human consciousness exists within a system where brains perceive and interact with not only the physical world but also other humans. We have no way, as yet, of knowing whether that "brain in a vat" would exhibit consciousness because we have no way of creating and maintaining such a brain. In the same way we have no way, as yet, of knowing whether a computer whose complexity approaches that of a human brain and is similarly connected to the external world could exhibit consciousness.
To describe such a computer as deterministic would imply that all the external events that it has perceived and which have been used to modify its behaviour are deterministic and that is clearly not the case, unless we happen to believe in a deterministic universe.
Comparing a bird to an aeroplane is like comparing a running rhinoceros to a stone rolling downhill. Yes, we have discovered that by pushing a rock it runs downhill just as a rhinoceros does. This argument is a hand casting a shadow on a wall, and we're supposed to pretend that it's a rabbit.
@Unknown at the time of writing.
According to Susan's analogy, the shadow is consciousness, and both the hand and the rabbit can do it, despite being very different. Or consciousness is the fact of being affected by gravity, and both the rock and the rhino do it, despite being very different. Your comment is only offering further examples of the same analogy.
Your examples do, however, better highlight an important point - that in all of these cases, the effect achieved is not inherent in the objects themselves, but rather, the objects simply harness external phenomena (gravity in the first case, light in the second). If the laws of physics did not allow for the existence of photons, neither the organic rabbit, nor the artificial hand shape, could ever produce a shadow. The same goes for rocks and rhinos going down hills in a universe without gravity.
In the same way, without the laws of nature that provide the preconditions for the shadow of consciousness to be manifest, there could be no human consciousness, no matter how much a man and a woman loved each other. Evolution has simply harnessed this law of nature to produce sentience. The question is, given that such a law of nature is clearly in position, and given that there are so many examples of multiple realizability in the world (birds and planes, rhinos and rocks, rabbits and hands, to name so very few), how can we be SO confident that harnessing the same laws of nature using silicon, to produce a similar effect, is an absurd impossibility, despite having to admit that we have absolutely no idea whatsoever what consciousness even is?