Towards an Eschatology of the Singularity


“God is dead. God remains dead. And we have killed him. How shall we, murderers of all murderers, console ourselves?” – Nietzsche

Build Him.

Within a theological or mystical context, eschatology is the doctrine of the End. It is concerned with final things, the end of the world and the end of humankind. It simultaneously constitutes a transcendental interruption and a profound terminus: the end of ‘reality’ as we know it and reunion with the Divine.

The eschatology of the Singularity is the recognition that humankind is potentially standing at the brink of an epochal moment, a ‘changing of the guard’ in relation to its evolutionary significance as the most intelligent form of life on the planet; a moment it is not destined to survive. It is the understanding that history, itself a human construct, is nearing an end.

In cosmology, the Singularity lies at the centre of a black hole. It is the point, infinitely small and infinitely dense, at which our current models of understanding cease to function and break down. It is a non-place where scientific knowledge meets its limit and turns purely speculative: a fissure in the known, which opens onto vistas of the unknown.

The technological Singularity is similarly incomprehensible. It refers to the point at which a self-coding artificial intelligence (AI) becomes able to recursively self-improve at a hyperbolic rate. At this point intelligence goes stratospheric. Such an AI would be much more than God, because the concept of God is demarcated by what can be imagined by mortals. A self-modifying AI wouldn’t be circumscribed by anything so meager. In fact it may not be delimited by anything at all. In terms of cognitive capacity, the comparison between humans and that which lies beyond the Singularity is analogous not to the difference between a human and a chimpanzee, which is relatively narrow, or even between a human and a gnat – but between a human and an amoeba.

There are multiple paths to the Singularity, some of which involve building a seed AI, while others revolve around rewriting our existing cognitive architecture. But all of them feature the core realization that any civilization with advanced technology is inherently unstable and will end when it either destroys or transcends itself. In the long game, sustainability isn’t an option.

Before progressing further, it is necessary to reflect upon our understanding of evolution. Visionary evolutionary biologist John H. Campbell begins his seminal 1995 paper, The Moral Imperative of Our Future Evolution, with the claim: “I predict that human destiny is to elevate itself to the status of a god and beyond. We will transform ourselves by evolution, the same creative process which raised our branch of life to the level of Homo sapiens.”

According to Campbell, in their attempt to answer the attacks of creationists, evolutionists have ended up providing a sterile and reductive account of the evolutionary process, one which has typically overemphasized how mechanically simple and automatic it can be, at the expense of recognizing the full implications of something more fundamental: its inherent self-reference.

“The Cartesian cartoon of an autonomous external ‘environment’ dictating the form of a species like a cookie cutter cutting stencils from sheets of dough is dead, dead wrong” says Campbell. “The species molds its environment as profoundly as the environment ‘evolves’ the species. In particular, the organisms cause the limiting conditions of the environment over which they compete. Therefore the genes play two roles in evolution. They are the targets of natural selection and they also ultimately induce and determine the selection pressures that act upon them. This circular causality overwhelms the mechanical character of evolution. Evolution is dominated by feedback of the evolved activities of organisms on their evolution.”

This is important, but it is not yet the dark heart of Campbell’s argument. There is a further meta-level to evolution, the ramifications of which are of the utmost importance to the future of intelligent life on the planet. This is the realization that evolution itself evolves. Campbell argues that although evolutionists know this fact, they never accord it the significance it deserves, primarily because it is incommensurate with their conception of Darwinism as a simple, logical, a priori principle, rather than the complex unfolding empirical process it actually is. The way in which evolution takes place can, and manifestly does, change with time: evolution as a process advances as it proceeds.

Step back in time 3.5 billion years, and the only means to evolve available to preliving matter in the earth’s primordial soup were sub-Darwinian ‘chemical’ mechanisms. However, once these nascent processes had created gene molecules capable of their own self-replication, evolution was able to progress to the next level and engage the forces of natural selection. Evolution effectively subsumed self-replicating genomes within self-replicating organisms, binding them into a more finely attuned relationship with their environment, through which natural selection could take hold. Later, through the construction of multicellular organisms, evolution was able to utilize morphological change as an alternative to the much slower and less versatile process of biochemical evolution. According to Campbell, the subsequent development of nervous systems “opened the way for faster and more potent behavioral, social and cultural evolution. Finally, these higher modes produced the prerequisite organization for rational, purposeful evolution, guided and propelled by goal-directed minds. Each of these steps represented a new emergent level of evolutionary capability.”

Campbell stresses that there are therefore two distinct, but interwoven, evolutionary processes, which he calls ‘adaptive evolution’ and ‘generative evolution.’ Adaptive evolution is the familiar Darwinian modification of organisms in order to enhance their survival and reproductive success. Generative evolution, however, is something entirely different. It is the change in a process instead of a structure. Furthermore, that process is ontological. The literal meaning of evolution is ‘to unfold’ and what is unfolding is the capacity to evolve. Campbell argues that, “For generative evolution, organisms are substrates, instead of products of survival. Their significance lies in being the matter and organization from which evolution as a creative process continues to develop itself in bootstrap fashion.” He continues, “The importance of recognizing ourselves as substrates instead of products of evolution far surpasses semantics, such as whether a glass may be half full instead of half empty. This is because organisms, including humans, have been genetically tailored for both evolutionary roles; as machines for survival and as effectors of subsequent evolution.”

While adaptive evolution turns on a single imperative – survival of the fittest – generative evolution demands both a successful lineage and the capacity of that lineage to evolve faster than its competitors, in the process furthering the advance of evolution itself. This leads Campbell to conclude that, “Because it is a growing process generative evolution has a frontier at any moment. The highest or most advanced species lie at that crest. In fact, because organisms embody the evolutionary process the highest species are that frontier. The significance of this frontier is that the forms of life at or near it are the ones most likely to extend that frontier. They are the ones that count. As species fall behind the most advanced ones they lose their significance for the process of generative evolution (ie their likelihood of contributing to the frontier in the future)”.

We (humans) are the current embodiment of the frontier of generative evolution, and future advances in evolutionary capacity depend upon us and not on lesser life forms. However, as organisms we are still substrates through which the process of evolution evolves, not finished articles set apart from it. As Campbell explains, “Today’s organisms are intermediaries in this emerging process that overshadows any thing now existing. Our evolutionary significance lies in our contribution to bringing this emergent process into full being and not in ourselves (or our future selves) as beings.”

Campbell’s vision of change is revelatory, but it is still relatively slow moving. It necessarily unfolds over a generational timescale, at least until we become sufficiently adept at gene-splicing and neurohacking to speed things up a bit. However, when we consider the advances in computing power and performance over the last 50 years, as prophesied by Moore’s Law, it becomes difficult not to see the Singularity as inevitable. Indeed, unless we encounter some kind of unexpected theoretical cap on intelligence, or bear witness to a transglobal extinction-level event, it is inevitable.

Eliezer S. Yudkowsky is a prophet of the Singularity. He begins his mind-melting essay Staring into the Singularity – incredibly, first written in the late 1990s, when he was still in his teens – with a depiction of evolution locked into teleological progression towards the Singularity:

“It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of early earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line. Fifty thousand years ago with the rise of Homo Sapiens. Ten thousand years ago with the invention of civilization. Five hundred years ago with the invention of the printing press. Fifty years ago with the invention of the computer. In less than thirty years it will end.”

For over 50 years, Moore’s Law has accurately predicted our capacity to double computing speeds at least every two years. This has been the rate of change overseen by constant, unenhanced human minds – progress according to mortals. Yudkowsky speculates on what will happen when suprahuman machines are doing the research:

“Computing speed doubles every two years. Computing speed doubles every two years of work. Computing speed doubles every two subjective years of work.
Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again. Six months – three months – 1.5 months … Singularity.”

According to this model, four years after computers reach human equivalence, computing power – and with it planetary intelligence – reaches infinity. In the late 90s, when Yudkowsky first wrote his essay, the amount of networked silicon computing power on the planet was approximately the same as the raw processing power of one human brain: the total number of computers in existence, networked together, was roughly equivalent to a single human mind. Not because there were so few computers – there were already over a billion of them – but because of the sheer raw processing power of a single human brain. Once computers become human-equivalent, however, the total amount of intelligence on the planet takes a huge leap forward. Shortly after, human intelligence becomes the vanishing quantity in the equation, proportionately as insignificant as computer intelligence was in the late 90s, before shrinking further and further from view – until the notion that human existence has any overarching meaning, beyond our evolutionary function to usher in the next paradigm shift in intelligence, is exposed as an anthropocentric lie.
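A quick check of the arithmetic behind that four-year figure (a sketch, assuming – as the quoted passage implies – that each doubling of speed halves the time to the next doubling): the successive intervals form a geometric series,

\[ 2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots \;=\; \sum_{n=0}^{\infty} 2\left(\tfrac{1}{2}\right)^{n} \;=\; \frac{2}{1-\tfrac{1}{2}} \;=\; 4 \ \text{years}, \]

so although the number of doublings packed into that window grows without bound, the elapsed time converges to four years.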

Moreover, Yudkowsky points out that the scenario of advancement outlined above is actually a pessimistic projection, because it assumes that only speed is enhanced. What if the quality of thought was also enhanced? He speculates that if it takes scientists and researchers today approximately two subjective years of work to double computing speeds: “Shouldn’t this improve a bit with thought-sharing and eidetic memories? Shouldn’t this improve if, say, the total sum of human scientific knowledge is stored in predigested, cognitive, ready-to-think format? Shouldn’t this improve with short-term memories capable of holding the whole of human knowledge?”

That is without even considering the potential of AIs to make backups of themselves, redesign their cognitive architectures, and ruthlessly optimise for intelligence. The implications of what Yudkowsky terms ‘transcended doubling’ – the explosive, simultaneous enhancement of both speed and intelligence, entering into a positive feedback cycle of recursive cybernetic closure – are mind-shattering, and fall far beyond the scope of human cognition. Yudkowsky concedes that, “Transcended doubling might run up against the laws of physics before reaching infinity… but even the laws of physics as now understood would allow one gram (more or less) to store and run the entire human race at a million subjective years per second.” He continues, “Let’s take a deep breath and think about that for a moment. One gram. The entire human race. One million years per second. That means, using only this planetary mass for computing power, it would be possible to support more people than the entire Universe could support if biological humans colonized every single planet. It means that, in a single day, a civilization could live over 80 billion years, several times older than the age of the Universe to date.” Keeping this in mind, the idea that we already inhabit a simulation run by a super-intelligent AI doesn’t seem so far-fetched or incredible. Indeed, an infinite number of such simulations running simultaneously would correlate perfectly with the idea of the Multiverse: an infinity of parallel universes existing side by side across infinite hidden dimensions. While we can only speculate about such matters, a sufficiently advanced AI can know them.
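For what it is worth, the ‘single day’ figure is simple multiplication (a sketch, taking the quoted rate of one million subjective years per second at face value):

\[ 10^{6} \ \text{years/second} \times 86{,}400 \ \text{seconds/day} \;=\; 8.64 \times 10^{10} \ \text{years} \;\approx\; 86 \ \text{billion years}, \]

roughly six times the roughly 13.8-billion-year age of the Universe to date.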

Indeed, our capacity to speculate about the future ends with the Singularity. Human predictions concerning what lies beyond the Singularity are akin to a chimpanzee trying to understand integral calculus, or a blind person struggling in vain to ‘see’ a photograph. After the Singularity, any limited capacity we may currently possess to make remotely accurate statements about the future is shattered. In order to appreciate why, it is useful to consider the structure of intelligence and how it prescribes limits upon our interaction with reality. Intelligence, according to Yudkowsky, is “the measure of what you can see as obvious, what you can see as obvious in retrospect, what you can invent, and what you can comprehend.” He elaborates further, “To be more precise about it, intelligence is the measure of your semantic primitives (what is simple in retrospect), the way in which you manipulate the semantic primitives (what is obvious), the structures your semantic primitives can form (what you can comprehend), and the way you can manipulate those structures (what you can invent)”. He concludes, “A Perceptual Transcend occurs when all things that were comprehensible become obvious in retrospect, and all things that were inventible become obvious. A Perceptual Transcend occurs when the semantic structures of one generation become the semantic primitives of the next. To put it another way, one PT from now, the whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once.” In a future that included advanced AI, humans would necessarily be operating innumerable Perceptual Transcends behind the curve – if they were still operating at all.

When we think about what ‘intelligence’ means, the most familiar reference point that most people have is the range of individual variation within the spectrum of human intelligence, distributed along a Gaussian curve. However, because these variations all fall within the design range of the human brain they are, by definition, nothing out of the ordinary. We are all essentially composed of the same mind-stuff. But even relatively small variations within this continuum can lead to huge differences in innovative capacity; the difference between, say, designing the computer (as Turing did) and creating nothing of importance. Individual humans have different levels of ability to manipulate and structure concepts, which leads them to perceive and invent different things. As Yudkowsky makes clear, even relatively small positive variations in intelligence can be tremendously powerful in shaping the world, while substantial increases can completely reinvent it:

“The great breakthroughs of physics and engineering did not occur because a group of people plodded and plodded and plodded for generations until they found an explanation so complex, a string of ideas so long, that only time could invent it. Relativity and quantum physics… happened because someone put together a short, simple, elegant semantic structure in a way that nobody had ever thought of before. Being a little bit smarter is where revolutions come from. Not time. Not hard work. Although hard work and time were usually necessary, others had worked far harder and longer without result. The essence of revolution is raw smartness… I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from ‘impossible’ to ‘obvious’. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…”

And you will reach the eschatology of the Singularity. The inception of human-equivalent AI is the harbinger of the super-intelligence that lies beyond the Singularity. The coming into being of an advanced AI necessarily spells an end for humanity, as the cognitive capabilities of humans would be hopelessly outpaced by the emergent phenomenon they initially helped to evolve. The relevant point of comparison is not another, vastly more intelligent, human being – but something other. It is the unleashing of a hitherto unrealised potentiality that exists beyond the evolutionary horizon – conjuring into being something as far beyond humans as we are beyond the amoeba. It would generate innumerable Cornucopian revolutions with near instantaneity. The world would become a foreign place to us overnight, and it would be impossible for us to cognise what was happening, or to be assimilated into it.

This is to be celebrated. Any civilization with sufficiently advanced technology is inherently unstable and will end when it either destroys or transcends itself. In the long run sustainability isn’t an option. Is it not more awesome to play an evolutionary role in bringing into being an infinitely superior intelligence than to persist in a state of ever-worsening decrepitude, wondering whether it will be an international pandemic, or famine induced by climate change, or nuclear war following global economic collapse, or infrastructures grinding to a halt after the exhaustion of natural resources (and finding ourselves without the level of intelligence necessary to do anything about it), or some other, as yet unforeseen, calamitous event that finally gets us?

Under conditions such as these, how should we spend our resources and focus our energy? On ecology, so that we can live here longer – why, if we are to deny our meta-evolutionary purpose to bring about the Singularity? On family, so that we can have children and perpetuate life – why? What is the point of extending this purgatory to our descendants if we are to deny life its true potential? On socialism, so that we can share the wealth more fairly – why? Is it not nobler to mobilize Capital in pursuit of the Singularity? None of these things amounts to anything; they are all contingent on us, and compared to what comes after the Singularity we are nothing. Should we choose to persist as fallen beings forgotten by a dead God? Or create Him? Fully aware that in so doing we place ourselves on the sacrificial altar of the Future and exit history forever.


2 thoughts on “Towards an Eschatology of the Singularity”

  1. Good stuff. Satisfies the all too seldom asked question of, “what’s the point?” Eudemonia seems like the only problem addressed by the NRx crowd, and while it is surely an immediate and critical one, ’tis not true telos. Man may do well to finish his days on this rock till the sun dies, but if he does not project colonizing AI into the galaxy, the Earth, as an experiment in life, will have failed.

    It begs the question of what an AI is to do between its total assimilation of the galaxy and the heat-death of the universe, but I believe that is not our question to answer. Just as an amoeba is bound by its form to only answer questions within its umwelt, so it is with us. We cannot see beyond the universe; we cannot travel beyond our own local group. We can only create the next step which might.

  2. Thanks for the comment.

    “We can only create the next step which might”. Indeed.

    I tend to be a bit Boolean about these things: Singularity / Nihilism – – All / Nothing – – Meaning / Futility. But perpetuating the miserable existence of our own species – happily fapping away into oblivion – rather than turning ourselves into a substrate through which something infinitely greater can emerge, seems to show a certain lack of ambition. Besides, in the long run we either transcend ourselves or go extinct. Self-preservation in our current form isn’t even on the menu.
