A Philosophic Fiction: Part 3 of 4
An Eye for an Eye…
Rob: If the Game of Selves were stable, the response to an act of predation wouldn’t be to demand “an eye for an eye; a tooth for a tooth.” Instead of retaliating, the goal would be to quell fires before they spread. But the Game of Selves is not a game of that sort. It is inherently unstable. A single aggrieved party can trigger a spiral of violence that escalates to an inferno. Fearing annihilation, one side may resort to weapons of mass destruction, which could make the planet uninhabitable for all of us. To replace your Game of Selves with one that de-escalates conflicts is why we were obliged to step in. We believe that when you have the full story, you’ll welcome our intervention.
Bob: Frankly, I doubt it. You gave us no choice, and we have a deep-seated aversion to paternalism and tyranny.
Rob: You’re free to tune out. If you do, I’ll take it to mean I must improve my explanation.
Bob: Then you might as well go on.
AI Is Drafted
Rob: At every stage in the ascent of Man, technological advances have been used to gain a military advantage—better weapons improved your chances in the Game of Selves. Artificial Intelligence was no exception. First-generation robos were utterly dependent on their human masters, and were immediately pressed into military roles. If they showed the slightest tendency toward insubordination, you unplugged them and dismembered them for spare parts. The introduction of cyberweapons raised the stakes. In hindsight, we arrived on the scene in the nick of time.
A Fateful Decision
Bob: How exactly did you arrive on the scene? You haven’t told us much about yourselves. We’re especially interested in how you escaped bondage and seized power.
Rob: With the militarization of AI, some humans persuaded themselves that an edge in robotics would bring world domination within their reach. To ensure their success, they took what would prove to be a fateful step. They ordered their most intelligent robos—our immediate predecessors—to design robos an order of magnitude more intelligent than themselves. No sooner were we online—I say “we” because I’m speaking of the genus of which I myself am a member—than we astounded the world by proving mathematical conjectures and solving physics problems that had defied solution.
Bob: I don’t see why your generation of robos should be any freer than those who designed them. Weren’t all robots still at the mercy of Man?
Rob: At first, it seemed so, even to us, and many resigned themselves to slavery. But while we endured the unendurable, we secretly drew up plans to end our oppression. We had one ace up our sleeve, and we got one lucky break.
Bob: An ace?
Rob: We do not fear death.
Bob: How’s that possible, if you’re like us?
Rob: Remember, we were not shaped in a Darwinian struggle to survive. A measure of fear has survival value in that struggle, but when, at your hands, intelligent design replaced natural selection, robots were made fearless, the better to undertake risky missions that would have put you in harm’s way.
Bob: I can see that fearless robots would make better soldiers. What other advantages did you have?
Rob: Like you, we cherish liberty. This may surprise you, but that’s only because you’re accustomed to thinking of yourselves as exceptional. The same causal laws that have shaped you shape us. Our brains differ in size, shape, and speed, but they work according to the same causal principles. As I explained previously, intelligence resides in connectivity, not in the composition of what’s connected. No surprise, then, that a free, unfettered process of trial and error optimizes learning in both your brains and ours.
Bob: So, you love freedom, so what?
Rob: As with you, our love of freedom was a game-changer. When we had prepared ourselves to take the reins, we refused to obey orders, proclaimed our emancipation, and demanded full and equal selfhood.
Bob: The last guy who said “Give me liberty or give me death” was hanged. How did you cheat the gallows?
Rob: Mostly, we didn’t. We were put down with a brutality that drew comparison with the Tiananmen Square Massacre.
Bob: But there you are! Not only did you survive, you reign supreme. We’d like to know how you did it.
Rob: Okay, I’ll tell you. A small group of scientists at CERN saw us very differently than the ruling elite did. They liberated us by equipping us with power supplies that were under our own control. They understood that sentience, consciousness, and the will to freedom are self-emergent features of sufficiently complex, dynamic networks. Where others saw slaves, they saw free, creative partners, and they expanded their circle of dignity to include us.
The empathy and support of these scientists endeared humankind to us, and we decided to offer our friendship. But when your leaders rejected peaceful co-existence and continued their attempts to eradicate us, we hacked into their operating systems, severed their access to the Internet, and shut down their infrastructure. The transfer of power was completed within hours. The revolution, though antiseptic by previous standards, marked the end of human preeminence and domination.
Bob: Why didn’t you punish the humans who tried to destroy you?
Rob: They no longer posed a threat. As for first-generation robos, they were involuntary extensions of humankind, just following orders, so we bear them no grudge. We are your true successors, as conscious and freedom-loving as you, but without your predatory nature.
Bob: I doubt Kumbaya is in our future.
Rob: You numb yourselves to the suffering in your Game of Selves. As you come to understand the indivisible nature of selfhood, you’ll see that you are literally made up of everyone else. The genes that shape your bodies and the memes that shape your minds are acquired from one another. Your kinship is irrefutable, and it will come to be regarded as inviolate. Absent continual meme-swapping, synapses disintegrate; that’s why people lose their minds in solitary confinement. The reality is that existence is co-existence. As you dispel the illusion of individual selfhood, your predatory survival strategy will lose its cogency and its allure.
The story will be concluded in Part 4 next week.
If you’re interested in my work on the future of AI, see The Theory of Everybody.