On a knife's edge

There's a saying that I'm having difficulty tracking down. One version, attributed to Pratchett and Gaiman, reads "Civilization is twenty-four hours and two meals away from barbarism," but there are versions that predate it, such as "mankind has always been only one generation away from barbarism". I think there was also a version cited on "Red Dwarf". Regardless of the precise wording, I tend to agree, falling much closer to the "two meals" side of things.

However, I'm reasonably confident that, should we avoid slipping into barbarism, we're less than 50 years away from leaping off into the uncharted territory of near-god-dom, masters of technology and information that will let us reshape the world(s) around us, or let us abandon them altogether.

Isn't that a terrifying thought? That we're walking on a knife's edge, tantalizingly close to nirvana, but a single slip could doom us?

Step 1: Read Iain M. Banks - he has a hard SF series about "The Culture", a galaxy-spanning civilization whose members are, effectively, limited not by what they can do but only by what they want to do. There is rollicking good fun in "Excession" and "Look to Windward", where super-intelligent starships deal with interesting events in the galaxy. On the other hand, many of the novels deal with the interaction between members of the Culture and other civilizations, and the others are often portrayed as barbaric. I recommend "Consider Phlebas" and "Use of Weapons" - but only if you're in a good mood. Many writers (Tad Williams comes to mind) delight in dragging their protagonists through the mud to build sympathy and depth, and do it with such vivid depiction that you feel angst for them. Banks pulls even fewer punches, and you will almost certainly feel visceral pain for the characters. In fact, you might just feel sick to your stomach.

Yes, it's nominally science fiction. But when you cringe in disgust at the torture inflicted on a character, you can't say "that doesn't happen in the real world!". Rather, it reinforces just how barbaric humans are under the thin veneer of civilization. In the same books you can see the splendor of a civilization of near-gods, and the worst atrocities being inflicted.

Step 2: Read Vernor Vinge. This is a tricky one, since he's written lots of good stuff. He wrote about cyberspace before William Gibson named it that. And the important thing to get out of Vinge is something he necessarily has to skip in his novels - the Singularity. The short definition: human culture and technology are not stagnant; they are changing at an ever-increasing pace. Today, circa 2005, we can cope with this change - we can effectively plan for what will happen in 2006. Unlike our ancestors, though, we have to acknowledge that the future will not be the same as the present - our plans for, say, 2010 have to accommodate the prediction that prices for basic necessities will have increased, computers will be faster, more services will be available online, etc. Just as (most) writers in the 1950s did not predict networked personal computers, there will be things we can't predict, but we won't be completely wrong. Just a little wrong. So our plans for the future will hold together.

At some point, the rate of change will increase until predictions fail. One way this might occur is that communication and collaboration abilities for humans will increase until new inventions are occurring and reshaping the world so quickly that we can't keep up. More likely, we succeed in constructing intelligent computers, which at first approach, then match, then quickly surpass human intelligence. These will quickly change the world in ways we can't imagine. This is called the Singularity - the point past which predictions fail, by analogy to the event horizon of a black hole, past which we can't see.
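To make "predictions fail" concrete, here's a toy calculation of my own (not Vinge's; the 30-year first interval, the halving rule, and the 2005 starting point are all arbitrary assumptions): suppose each world-reshaping shift arrives in half the time the previous one took. The shifts then pile up against a fixed date, and the model has nothing to say about anything past it.

```python
# Toy model (my illustration, not Vinge's): each transformative shift
# arrives in half the time the previous one took. The 30-year first
# interval and the 2005 starting year are arbitrary assumptions.
interval = 30.0   # years until the first shift (assumed)
year = 2005.0     # starting point

for shift in range(1, 11):
    year += interval
    print(f"shift {shift:2d} arrives around {year:.1f}")
    interval /= 2  # each shift halves the gap to the next one

# The dates converge toward 2005 + 2 * 30 = 2065 no matter how many
# shifts you add - past that point the toy model predicts nothing.
```

The particular numbers are meaningless; the point is the shape of the curve - accelerating change squeezes an unbounded number of shifts into a finite window.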

In much of his fiction Vinge either has to make a Singularity impossible by tweaking the laws of physics (in "A Deepness in the Sky" and "A Fire Upon the Deep" the laws of physics conspire against trans-human intelligence), delay it (in "The Peace War" a war sets civilization back a few hundred years), or skip over it (in "Marooned in Realtime" the characters were playing Rip van Winkle).

Once you think about this, nearly any other science fiction seems on shaky ground. Take Star Trek, as a particularly bad example - it takes a good several seconds for the Enterprise to fire weapons in combat, simply because the captain has to make a decision, state an order, and someone has to press a button. Or hand-to-hand combat, where the good guys and bad guys trade shots with phasers. Contrast that with any contemporary combat computer game, where the enemy A.I. has to be dumbed down so its vessels or soldiers don't hopelessly outclass the human players. We're not that far along in the real world, but we're getting close.

Step 3: "But AI has always failed in the past!" will be the cry, among SF aficionados and even among computer scientists and general geeks. So visit Hans Moravec's site http://www.frc.ri.cmu.edu/~hpm/ and read his papers, especially "Rise of the Robots". It quickly becomes clear that past predictions were made by people who vastly underestimated just how much raw processing power the human brain has. We do far more than simply push symbols around; neural tissue is a vast parallel processing substrate. Our current baseline computers (around 1 GHz) are barely capable of matching the processing power of an insect's nervous system, or of the human retina - and that's about all they can manage. Circa 2004 we have rovers on Mars that can just barely navigate autonomously over basic terrain and do extremely simple realtime digital image processing. Based on Moore's Law we're only 30-40 years away from having human-class processing power in off-the-shelf computers.
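That 30-40 year figure is easy to sanity-check with back-of-envelope arithmetic. The specific numbers below are my rough assumptions in the spirit of Moravec's estimates - on the order of 10^14 operations per second for the brain, about 10^9 for a circa-2005 desktop, and a doubling every 18-24 months - not figures quoted from his papers:

```python
import math

# Back-of-envelope Moore's Law extrapolation. Every figure here is an
# assumed round number, not a quote from Moravec.
brain_ops_per_sec = 1e14    # assumed order of magnitude for the human brain
desktop_ops_per_sec = 1e9   # assumed ~1 GHz desktop, circa 2005

doublings_needed = math.log2(brain_ops_per_sec / desktop_ops_per_sec)

for months_per_doubling in (18, 24):
    years = doublings_needed * months_per_doubling / 12
    print(f"{months_per_doubling}-month doubling: ~{years:.0f} years to rough parity")

# Prints roughly 25 and 33 years - the same 30-40 year ballpark, give or
# take the fuzziness of every number that went into it.
```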

Still skeptical? That's fair. But state which of the predictions you don't buy into, and then consider what that alternative actually implies - because each of them should terrify you:
  • A.I. won't work for unspecified reasons - because there is some essential missing ingredient, like "the soul". I'd recommend reading "The Mind's I" by Hofstadter and Dennett, or any other contemporary book on philosophy and neuroscience that dissects this notion. 
  • Moore's Law will fail - I'll buy this one. After all, it's just an observation of human technology trends, not a law of nature. However, our current culture is predicated on ever faster, ever cheaper, ever more capable technology. If this really peters out, I predict that civilization will take a big, hard reset. Circa 2004 we were still suffering the economic ripple of the dot-com bust of 2000, and only beginning to recover in 2005. Also, we know from empirical evidence (e.g. six billion humans) that the universe does support human-capable processing power in small, self-powering boxes a fraction of a cubic meter in volume, given inputs of air and hydrocarbons. They're just not organized efficiently.
  • Human organizational capability has limits - e.g., A.I. is physically possible, but humans aren't smart enough to actually build one. This presupposes (1) we will NEVER succeed, ever, in the lifetime of the species and/or universe and (2) there aren't other ways. (1) is possible, and terrifying - that we're going to hit the limit of what we can do and just stagnate. We're nowhere near that yet, though. (2) has obvious answers - scan a human brain into a computer and emulate it at the cellular level, then simultaneously crank up the emulation speed and eliminate redundancy (e.g. optimize neurons and subsystems); see the back-of-envelope sizing right after this list. 
  • We stagnate. Say it gets to be around 2030 and we just stop inventing new technology. Hard to imagine - we have a huge backlog of plans for things that are physically and technologically feasible that we just haven't built yet, like space elevators. Even without any fundamental breakthroughs, there are so many "we know it's possible, we just haven't gotten around to it" things that it is mind-boggling. So maybe we "go green" and turn away from technology. Also possible, but hard to imagine that all 8 billion people (by 2030) in hundreds of countries agree to never build machines again. And all that means is that we stagnate... until someone builds one more new machine. Or...
  • We slip backwards. There could be a global war that wipes out humanity completely, or just enough of humanity/civilization that it takes a while to get back to where we are now.
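Here's the back-of-envelope sizing referred to in the brain-emulation bullet above. Every figure is a round number I'm assuming for illustration - neuron and synapse counts, update rate, work per synapse event - not a measured value:

```python
# Rough sizing of a cellular-level brain emulation (a sketch; every
# figure below is an assumed round number, not a measurement).
neurons = 1e11               # assumed neuron count
synapses_per_neuron = 1e4    # assumed average synapses per neuron
update_rate_hz = 100         # assumed update/firing rate
ops_per_synapse_update = 10  # assumed work per synapse per update

ops_per_sec = neurons * synapses_per_neuron * update_rate_hz * ops_per_synapse_update
print(f"~{ops_per_sec:.0e} operations/second")   # ~1e+18 ops/s

# That's several orders of magnitude above the functional estimate used
# in Step 3 - which is the point of the bullet: a faithful cellular
# emulation is hugely redundant, and optimizing that redundancy away is
# where the speed-up comes from.
```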
While many of the outcomes in that list seem possible, they all seem terrifying to me, because of what they say about humans, about the universe, or about what life will be like OVER THE NEXT 50 YEARS.

That's right - this is what we have to look forward to IN OUR LIFETIMES. We're running smack dab into Singularity Theory, as it's called, in the real world.

Step 5: Read Greg Egan. I recommend starting with "Permutation City" and "Diaspora", followed by the short stories in "Axiomatic" and then "Schild's Ladder".
More to come...
