Before You Can Talk About the Singularity, You Must Define It

If you think about technology, and where it may be taking us, it’s impossible to ignore the idea of the Singularity. But if you’re going to talk about it at all, it’s best to start off by defining just what it is you mean. Different people are using the term for a few different concepts these days. (Though at least the memetic mutation isn’t nearly so scattered as the ridiculous array of meanings and outright hot air clustered around “Web 2.0”.)

The Original Singularity: Mathematicians, Represent!

The original concept was the mathematical singularity: A point at which a given mathematical function’s output is not defined. For example, consider y = 1/x (the classic hyperbolic curve): as x approaches 0, y shoots off toward infinity, and at x = 0 itself, y is completely undefined — a literal “divide by zero” error.
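The divide-by-zero behavior is easy to see for yourself. Here's a minimal Python sketch (the function name `hyperbola` is just an illustrative label, not anything standard) showing y = 1/x blowing up as x approaches 0, and failing outright at x = 0:

```python
def hyperbola(x: float) -> float:
    """y = 1/x, the classic hyperbolic curve -- undefined at x = 0."""
    return 1.0 / x

# Approaching the singularity from the right: the output grows without bound.
for x in (1.0, 0.1, 0.001):
    print(f"1/{x} = {hyperbola(x)}")

# At the singularity itself, the function has no value at all -- Python
# raises the literal "divide by zero" error.
try:
    hyperbola(0.0)
except ZeroDivisionError as err:
    print("x = 0:", err)
```

The exception at x = 0 is the whole point: the function isn’t merely large there, it has no defined value at all, which is exactly the sense of “singularity” the later metaphors borrow.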

This gave rise to the gravitational singularity: A point in space-time where the manifold’s curvature (and hence the gravitational field, and the density of any objects) is either unmeasurable or infinite.

Vernor Vinge’s seminal paper, The Coming Technological Singularity, maintains this idea of “change that becomes too fast to measure”, of graph-lines going asymptotic. Vinge writes: When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale…. Developments that before were thought might only happen in “a million years” (if ever) will likely happen in the next century. (In [Blood Music], Greg Bear paints a picture of the major changes happening in a matter of hours.)

Marc Stiegler’s 1989 short story “The Gentle Seduction” also uses the term in a rate-of-change sense, with one character introducing the idea as “a time in the future. It’ll occur when the rate of change of technology is very great — so great that the effort to keep up with the change will overwhelm us.”

Variations Abound

But others are using the term in slightly different ways. Wikipedia’s article on the technological singularity describes it as an event where the rate of change is so great that “the future after the singularity becomes qualitatively different and harder to predict.” This isn’t quite the same idea. Instead of saying that the Singularity itself will be too difficult to comprehend, it’s saying that the time after the Singularity will be too different for us to understand. It’s something like the distinction between a singularity and an event horizon (a boundary beyond which we cannot see). Yes, one causes the other, but they’re not the same thing.

This view is echoed in prominent sci-fi blog io9’s backgrounder, “What Is The Singularity And Will You Live To See It?”, which calls a singularity (small s) “the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history.”

In a more mainstream, pop-culture setting, a recent article on Roland Emmerich’s upcoming project describes the Singularity as “the point in time in which technological intelligence finally supersedes that of its human creators.” This is a very different definition; it references only a single technological advancement, and says absolutely nothing about an increased rate of change, or about unaugmented humans’ inability to keep up.

That story is probably getting its definition from the Singularity Institute for Artificial Intelligence, which blatantly declares that “The Singularity is the technological creation of smarter-than-human intelligence.”

Famed Singularitarian Ray Kurzweil runs a site mostly promoting his book, The Singularity Is Near. In it, he does at one point specifically describe the Singularity as “a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it” — but aside from that, he spends much more time describing things like the run-up to the Singularity:

[N]onbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), “experience beaming” (like Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.

On the even more easily found “About the Book” page, he describes “the union of human and machine, in which the knowledge and skills embedded in our brains will be combined with the vastly greater capacity, speed, and knowledge-sharing ability of our own creations” as being “the essence of the Singularity” (emphasis mine). He goes on to describe the Singularity as: “an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today”, and tells us that:

In this new world, there will be no clear distinction between human and machine, real reality and virtual reality. We will be able to assume different bodies and take on a range of personae at will. In practical terms, human aging and illness will be reversed; pollution will be stopped; world hunger and poverty will be solved. Nanotechnology will make it possible to create virtually any physical product using inexpensive information processes and will ultimately turn even death into a soluble problem.

This sounds awfully rosy, a very glowing example of Timothy May’s “Rapture of the Nerds” — there are a lot of people who think the Singularity will actually be a lot more chaotic, if not outright scary or unpleasant. But at least Kurzweil is reasonably specific about his predictions, unlike some other folks…

Futurist web site SingularityHub defines the Singularity, with a distressing lack of rigor or precision, as “the point in mankind’s future when we will transcend current intellectual and biological limitations and initiate an intelligence and information explosion beyond imagining.” Of course, “transcending current biological limitations” could easily refer to things like the development of powered flight by the Wright Brothers, or to the discovery of DNA, or the sequencing of the genome… or simply to Roger Bannister’s breaking the four-minute mile barrier. I’ll leave it to your imagination to think of how many things might be described by the fluffy and breathless phrase “an intelligence and information explosion beyond imagining”; suffice it to say that I find it no more useful than the first half of the sentence.

Perhaps the most honest definition comes from “Godling’s Glossary” by David Victor de Transend, which describes the Singularity as “A black hole in the Extropian worldview whose gravity is so intense that no light can be shed on what lies beyond it.”

Summing Up

So, depending on who you ask, the Singularity might mean any of:

  1. A time when technological progress goes so fast that we (meaning unaugmented humans who are alive at the time) can’t keep up with it;
  2. A time when technological progress goes so fast that people living before it can’t predict it (or what comes after it);
  3. A time when machines become smarter than humans (which will probably cause 1 and/or 2);
  4. A time when humans merge with machines (which isn’t quite the same thing as 3… but would probably cause it);
  5. A time when SingularityHub declares, “Woo-hoo! The Singularity has happened!”

In the writing I plan to do on the topic, I’ll mostly be using senses 1 and 2. If I use “the Singularity” to mean something else, I’ll say so clearly and explicitly.
