
Originally appeared at The Good Men Project as part of their original series 100 Words on Love.
Berkeley author Robert W. Fuller recently published his first novel The Rowan Tree.
Author, physicist, educational reformer, citizen diplomat, advisor to presidents, & futurist

I got a close look at the poison of “rankism” at the age of seven, when my classmate Arlene was sent to the hall for the whole school day. Arlene lived on a farm and wore the same dress to school each day. When she spoke, it was in a whisper. Our teacher, Miss Belcher, began every day with an inspection of our fingernails. One day she told Arlene to go to the hall and stay there until her fingernails were clean. I wondered how she could clean her nails out there, without soap or water. If there was no remedy in the hall, then the reason for sending Arlene out there must be to embarrass her and scare the rest of us.
Later, filing out to the playground, we snuck glances at her. She must have heard the snickering as we passed – hiding her face against the wall as I remember it, and trying to make herself look small. I told my mother what had happened to Arlene, and, as I must have hoped, she made sure the same thing didn’t happen to me.
Other kids whom my classmates regarded as safe targets for abuse included Frank, who was shamed as a “faggot”; Jimmy, who had Down syndrome and was ridiculed as “retarded”; and Tommie and Trudy, who were teased about their weight. The N-word was used only warily, typically from the safety of the bus carrying our all-white basketball team home after a defeat by a school that fielded black players.
Not belonging to any of the groups that were targeted for abuse, I was spared – until I got to college. There I realized that higher education was less about the pursuit of truth than about establishing another pecking order. I found myself caught up in games of one-upmanship, and was reminded of my classmates once again.
The toxic relationships described above are all based on traits that mark people out for abuse, whether in terms of class, sexuality, disability, body shape, color, or academic standing. And even if you fall on the privileged side of these traits, you can still be treated as a nobody by people who want to make themselves feel superior. I call this “rankism”, and it’s the cancer that’s eating away at all our relationships.
Emily Dickinson spoke about this problem in her “nobody” poem:
I’m nobody! Who are you?
Are you nobody, too?
Then there’s a pair of us – don’t tell!
They’d banish us, you know!
As she notes, nobodies look for allies and stand on constant guard against potential banishment. For social animals like us, banishment has long been tantamount to a death sentence. It’s no wonder we’re sensitive to even the slightest of indignities.
Dignity matters because it shields us from exclusion. It assures us that we belong, that there’s a place for us, that we’re not in danger of being ostracized or exiled. Dignity is the social counterpart of love.
In a seminal work of the modern women’s movement, Betty Friedan wrote of “the problem that has no name.” A few years later the problem had indeed acquired a name – it was “sexism” – and from then on women knew both what they were for (equal dignity and equal rights) and what they were against (indignity and inequality). That’s why pinning a name on any behavior that poisons relationships is the first step towards delegitimizing it.

The task confronting us today is to delegitimize “rankist” behaviors just as we are doing with other forms of oppression. That means all of us – you and me – giving up our claims to superiority. It means no more putting down other individuals, groups, or countries. It means affirming the dignity of others as if it were our own. Sound familiar? It’s the “golden rule” of dignity, and it rules out degrading anybody. When denigrating behaviors are sanctioned, potential targets (and who isn’t one at some point?) must devote their energy to protecting their own dignity. A culture of indignity takes a toll on health, creativity, and productivity, so organizations and societies that tolerate rankism handicap themselves.
The cancer of rankism persists as a residue of our predatory past. But the predatory strategy is no longer working, for two reasons. First, the weak are not as weak as they used to be, so picking on them is no longer safe. Armed with weapons of mass disruption, the disenfranchised can bring modern life to a stop. Humiliation is more dangerous than plutonium.
Second, the power that “dignitarian” groups can marshal exceeds that of groups driven by brute force and fear. When everyone has a place that is respected, everyone can work for the group as well as for themselves. “Dignity for all” is a winning strategy because it facilitates cooperation. Recognition and dignity are not just nice things to have; they are a formula for group success, and their opposites are a recipe for infighting, dysfunction, and failure. If we can put the spotlight on rankism and purge our relationships of this poison, then not only will we spare people humiliation, we’ll also increase our own creativity and that of our communities.
Part of Lady Gaga’s fandom stems from her leadership of the dignity movement. The kid who protests when a classmate is “nobodied” is another such leader, all the more so if he or she can protest in a way that protects the dignity of the perpetrator. When victims of rankism respond in kind to their abusers, they unwittingly perpetuate a vicious cycle. The only way to end such cycles is to respect the dignity of the perpetrators while leaving no doubt that their behavior is unacceptable.
In a dignitarian society, no one is taken for a nobody. Acting superior – putting others down – is regarded as pompous and self-aggrandizing. Rankism, in all its guises, is uncool.
Our age-old survival strategy of opportunistic predation has reached its sell-by date. A vital part of our defense against this strategy is not to give offense in the first place. Going forward, the only thing as important as how we treat the Earth is how we treat each other.
Robert W. Fuller is an author and independent scholar from Berkeley, CA. His recent novel The Rowan Tree is now available as an audiobook at Amazon, iTunes, and audible.com. The Rowan Tree is also available in paperback as well as Kindle and other ebook formats.

Atomic energy? DNA? Penicillin? Or something from the world of art or philosophy or psychology? The title question leaves plenty of room for debate.
My answer is that the most important learning of the century was disabusing ourselves of the notion that some people are inferior. Put the other way round, the most important misconception of the last century was the belief that some people were superior.
At the beginning of the 20th century, the existence of superior individuals and groups was widely accepted. Although there were some who disagreed, far more were eager to believe that their own kind were exceptional, and they were willing to degrade and exploit those whom they saw as their inferiors. Belief in the validity of such judgmental comparisons underlay much of the manmade suffering for which the 20th century is rightly known.
Well into the last century:
* Imperial powers believed themselves superior to the peoples they colonized and exploited.
* The doctrine of White Supremacy took many forms, including Jim Crow and Apartheid.
* Gentiles deemed Jews an inferior race.
* Ethnocentrism was the norm.
* The rich looked down their noses at the poor.
* Male supremacy and patriarchy were all but universal.
* Dominion over the Earth was defended as a God-given right.
* Co-religionists typically believed their faith superior to others.
* Heterosexuals regarded their moral superiority as self-evident.
* People with physical or mental disabilities were stigmatized.
* Native-born citizens felt superior to immigrants, and earlier immigrants felt superior to later arrivals.
* Traditional hierarchies of class and caste persisted. White-collar workers looked down on blue-collar ones.
* The academic world both mirrored and reinforced these valuations. Intelligence tests were regarded as certifying mental superiority and were used to justify consigning low-scorers to low-status jobs.
No doubt further examples will come to mind. But before going on, it is crucial to get one thing straight. I am not saying that differences do not exist or that performance cannot be judged, let alone that competition is bad. Of course some golfers are better than others, some musicians have more fans, some nations have higher income per capita, and some politicians outpoll their rivals.
What I’m saying is that ranking higher on a particular scale does not support a more general claim of superiority as a person. The winners of a race in a track meet are not superior human beings. If you insist, you can say that Mary was “superior” in the 400-meter dash on Saturday, but really all that means is that she crossed the finish line ahead of her competitors on that day in that event. The gold medal is her rightful reward, but it doesn’t mean that she’s a superior person. Larger, broader claims to superiority are unfounded, unseemly, and, as the 20th century amply demonstrates, treacherous.
The trouble with the superior/inferior distinction is that it’s used to confer or deny ancillary benefits, ones that go far beyond just rewards for winning a particular competition. Worse, claims to superiority are invoked to justify degradation, exploitation, and even the extermination of “inferior” individuals, groups, ethnicities, cultures, and peoples.
Because untold suffering has been licensed by presumed superiority, my nominee for the most important takeaway from the 20th century is the hard-won realization that applying the superior/inferior distinction to persons or peoples is specious. Such comparisons are odious. They present a grave danger not only to those deemed inferior, but also to those who pride themselves on their superiority.
This is not to say that imperialism, colonialism, exceptionalism, racism, sexism, ageism, ableism, homophobia, etc. have been eradicated. Hate-mongers and demagogues are constantly popping up and pandering to those who, doubting their own worth, hunger for assurances of superiority. American politicians, even those who know better, cultivate feelings of superiority by concluding their speeches with “America is the greatest country on Earth.” While such nationalistic puffery used to be music to patriots’ ears, it is increasingly cringe-making. To those who’ve come of age in a globalized world, exceptionalism rings false.
I can hear the objections already. Everywhere you look, some group, braced by a sense of its superiority, is demeaning or belittling those it regards as beneath it. Yes, such behavior persists into the 21st century, but increasingly it’s met with skepticism if not condemnation.
Here’s evidence of this change:
* Imperialism yielded to decolonization. The British, French, and others withdrew from Asia and Africa. Imperial designs of the Germans, Italians, and Japanese–intoxicated with their presumed ethnic superiority–led to the utter destruction of these would-be conquerors. The collapse of the Soviet Empire in the final decade of the century punctuated the end of empire.
* White Supremacy has become indefensible; the N-word unspeakable.
* Male supremacy and patriarchy are in retreat.
* Environmental protection and animal rights are gathering support.
* Homosexuality came to be seen as inborn, like heterosexuality. Lady Gaga’s hit–“Born This Way”–sums it up.
* Disabilities were de-stigmatized and people with disabilities laid claim to equal dignity.
* By century’s end, reflexive acceptance of entitlement and authority was out. Public skepticism, if not cynicism, toward anyone or any nation pretending to superiority was the new norm.
The hateful epithets that fell easily from people’s lips until mid-century have lost legitimacy; they embarrass not their targets but those who utter them. The ethnocentrism of 1900 now seems myopic. In its place is the idea that different cultures, like different languages, are simply different. Each is a complex social system with its own strengths and weaknesses. Ethnic or sectarian differences are not grounds for exploitation or predation.
One person is no more superior to another than a dachshund to a poodle, a dog to a cat, or a butterfly to a rose. Persons, groups, nations are incommensurate.
Individuals and groups react negatively to being labeled inferior, and sooner or later they will get even with those who abuse them. As Shakespeare slyly points out in The Merchant of Venice, the victimized, once they gain the upper hand, are usually inclined “to better the instruction.” To put it bluntly, condescension is a time bomb.
It cost millions of lives, but it seems to have dawned on us that a vital part of a good defense is not giving offense in the first place. What’s more offensive than claiming superiority for your religion or country, and expecting others to welcome your tutelage?
Postscript and Preview
Learning from the past is hard enough. Foretelling the future is impossible. Still, we must take the long view if only because a glimpse of where we’re headed can persuade us to change course to avoid a calamity.
So I conclude with another question and hazard another guess:
Which of the ideas that we now take for granted will do us the most damage over the course of this century? Or, putting it the other way round, for which of our delusions will our descendants most pity us?
To encourage you to formulate your own answer, I’ll give you mine.
The 21st century will reveal that, like superiority, selfhood is illusory.
What I’m suggesting is that there really are no separate selves. The word self is itself a misnomer. Autonomous, stand-alone selfhood is an illusion. Not only are we not better than anyone else, our selves are so entangled and enmeshed with other selves as to make individual selves indistinguishable. Separate selves, like superior selves, are a dangerous delusion.
Senator Elizabeth Warren pleased some and angered others when she pointed out that none of us can do anything by ourselves. That “it takes a village.” That’s an understatement. Actually, each of us is a village. We’ve been internalizing our “village” since our first stirrings in the womb.
Not only can no one do anything by him or herself, no self can even be by itself. To exist is to co-exist. Absent human interaction, minds do not develop or they break down. That’s why solitary confinement is torture. Our selves are either continually, communally co-created or they disintegrate.
During the current century we’ll have to reconceive our relationship to smart machines as their creative intelligence overtakes our own. Dealing with this humbling development will change our sense of self even more profoundly than the 20th-century realization that we’re not as special as we thought.
Reimagining human selfhood will take the combined efforts of philosophers, theologians, psychologists, neuroscientists, artists, and others. I’m sure that the answer I’ve broached here will give way to a succession of better ones. Coming to a new understanding of the relationship between individuality and collectivity–between self and other–and then reorganizing our social and political relationships accordingly will be the defining challenge and crowning achievement of the 21st century.
By 2100, we’ll have very different answers to the age-old questions: Who am I? Who are you? Who are we? Our new answers will cause us, in partnership with the intelligent machines we build, to remake the world.
An expanded version of this exploration of the future of the Self–and how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics–is available as a free e-booklet here.
This is the sixth and final post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.
We must believe in free will. We have no choice.
– Isaac Bashevis Singer
What Kind of Computer Is the Brain?
Computers can’t do everything humans do—not yet, anyway—but they’re gaining on us. Some believe that, within this century, human intelligence will be seen as a remarkable, but nonetheless primitive, form of machine intelligence. Put the other way round, it’s likely that we will learn how to build machines that do everything we do—even create and emote. As computer pioneer Danny Hillis famously put it, “I want to build a machine who is proud of me.”
The revolutions wrought by the Copernican and Darwinian models shook us because they were seen as an attack on our status. Without proper preparation, the general public may experience the advent of sophisticated thinking machines as an insult to human pride and throw a tantrum that dwarfs all prior reactionary behavior.
At the present time, there are many candidate models of brain function, but none is so accurate and complete as to subsume all the others. Until the brain is understood as well as the other organs that sustain life, a new sense of self will co-exist with the old.

The genome that characterizes a species emerges via a long, slow Darwinian process of natural selection. The menomes (the mental counterparts of genomes) that characterize individuals also originate via a Darwinian process, but the selection is among neural circuits and occurs much more rapidly than the natural selection that drives speciation. That the brain can be understood as a self-configuring Darwinian machine, albeit one that generates outcomes in fractions of a second instead of centuries, was first appreciated in the 1950s by Peter Putnam. Though the time constants differ by orders of magnitude, Putnam’s functional model of the nervous system recognized that the essential Darwinian functions of random variation and natural selection are mirrored in the brain in processes that he called random search and relative dominance.
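To make the analogy concrete, here is a minimal toy sketch of that selection loop. It is my illustration, not Putnam’s formalism: the dot-product “dominance” score, the number of variants, and the winner-take-all rule are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominance(circuit, stimulus):
    # Illustrative "relative dominance" score: how strongly this
    # candidate circuit responds to the current stimulus.
    return float(circuit @ stimulus)

def darwinian_step(circuit, stimulus, n_variants=20, noise=0.1):
    # "Random search": propose noisy variants of the current circuit.
    variants = circuit + noise * rng.standard_normal((n_variants, circuit.size))
    # "Relative dominance": the strongest responder wins and persists.
    scores = [dominance(v, stimulus) for v in variants]
    return variants[int(np.argmax(scores))]

circuit = rng.standard_normal(8)   # one candidate circuit's weights
stimulus = rng.standard_normal(8)  # the input it must respond to
for _ in range(100):               # fast, repeated selection cycles
    circuit = darwinian_step(circuit, stimulus)
```

Each pass plays out variation and selection in a fraction of simulated time, which is the point of the analogy: the same Darwinian logic, run at vastly different speeds.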
In 1949, Donald O. Hebb enunciated what is now known as the “Hebb Postulate”: “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” Peter Putnam’s “Neural Conditioned Reflex Principle” is an alternative statement of Hebb’s postulate that expands it to include the establishment and strengthening of inhibitory, or negative, facilitations as well as the excitatory, or positive, correlations Hebb described. The Hebb-Putnam postulate can be summed up as “Neurons that fire together wire together.”
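In modern rate-based notation, that postulate reduces to a one-line update rule. The sketch below is a standard textbook formulation rather than Putnam’s own notation; using signed activity levels is an assumption that lets the same rule cover his inhibitory (negative) facilitations alongside Hebb’s excitatory ones.

```python
import numpy as np

def hebb_putnam_update(w, pre, post, lr=0.01):
    """One learning step: neurons that fire together wire together.

    With signed activities, correlated firing strengthens a connection
    and anti-correlated firing weakens it, covering both excitatory
    correlations and negative (inhibitory) facilitations."""
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, -0.5, 0.0])     # presynaptic activity levels
post = np.array([0.8, -1.0])         # postsynaptic activity levels
w = np.zeros((post.size, pre.size))  # initial connection strengths
for _ in range(50):                  # repeated co-activation
    w = hebb_putnam_update(w, pre, post)
```

Repeated co-activation drives each weight in the direction of the correlation between the two cells’ activities, which is all “fire together, wire together” asserts.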
The reason replicating, or even simulating, brain function sounds like science fiction is that we’re used to relatively simple machines—clocks, cars, washing machines, and serial computers. But, just as certain complex, extended molecules exhibit properties that we call life, so sufficient complexity and plasticity are likely to endow neural networks with properties essentially indistinguishable from the consciousness, thought, and volition that we regard as integral to selfhood.
We shouldn’t sell machines short just because the only ones we’ve been able to build to date are “simple-minded.” When machines are as complex as our brains, and work according to the same principles, they’re very likely to be as awe-inspiring as we are, notwithstanding the fact that it will be we who’ve built them.
Who isn’t awed by the Hubble telescope or the Large Hadron Collider at CERN? These, too, are “just” machines, and they’re not even machines who think. (Here I revert to who-language. The point is that who- or what-language works equally well. What is uncalled for is reserving who-language for humans and casting aspersions on other animals and machines as mere “whats.” With each passing decade, that distinction will fade.)
The answer to “Who am I?” at the dawn of the age of smart machines is that, for the time being, we ourselves are the best model-building machines extant. The counter-intuitive realization that the difference between us and the machines we build is a bridgeable one has been long in coming, and we owe it to the clear-sighted tough love of many pioneers, including La Mettrie, David Hume, Mark Twain, John von Neumann, Donald Hebb, Peter Putnam, Douglas Hofstadter, Pierre Baldi, Susan Blackmore, David Eagleman, and a growing corps of neuroscientists.
Yes, it’s not yet possible to build a machine that exhibits what we loosely refer to as “consciousness,” but, prior to the discovery of the genetic code, no one could imagine cellular protein factories assembling every species on the tree of life, including one species—Homo sapiens—that would explain the tree itself.
The Self Is Dead. Long Live the Superself.
The generalization of the self-concept to the superself is unlikely to receive a reception much different from that accorded Twain’s What Is Man?
The co-creation characteristic of the superself will be scorned as collectivism, if not socialism. Reciprocal dignity will be ridiculed as utopian. Asking “What am I?” instead of “Who am I?” will be dismissed as reductive, mechanistic, and heartless.
Although the superself incorporates the witness, and so has a religious provenance, it’s fair to ask if it will ever speak to the heart as traditional religious models have done. It’s not easy coming to terms with life as a property of inanimate matter, arranged just so, and it will likely be even more difficult to accept ourselves as extended, self-conscious, willful machines.
Many will feel that this outlook is arid and bleak, and want to know: Where’s the mystery? How about love? Doesn’t this mean that free will is an illusion? Awe and wonder and the occasional “Eureka!” may be enough for science, but religious models have offered fellowship, absolution, forgiveness, salvation, and enlightenment. People of faith will want to know what’s holy in this brave new world.
The perspectives of religion and science on selfhood, though different, are not incompatible. Without oversimplifying or mystifying either, it’s possible to identify common ground, and, going forward, a role for both traditions. I propose such a collaboration in Religion and Science: A Beautiful Friendship?
My guess is that once we’re in the presence of machines that can do what we do, the model of selfhood we’ll settle on will be even more fecund than the traditional one. That co-agency replaces individual volition will not undermine a sense of purpose, though it will require a redefinition of personal responsibility. There’s no reason to think that machines sophisticated enough to outperform us will evoke less wonder and reverence than organisms that have arisen via natural selection. Mystery does not attach itself exclusively to human beings. Rather, it inheres in the non-human as well as the human, in the inanimate as well as the animate. As Rabbi Abraham Heschel notes, “Awe is an intuition of the dignity of all things, a realization that things not only are what they are but also stand, however remotely, for something supreme.”
Contrary to our fears, the capacity of superselves for love, fellowship, and agency will be enlarged not diminished. As the concept of superself displaces that of individual selfhood, the brotherhood of man and its operating principle—equal dignity for all—become self-evident and self-enforcing. Nothing in this perspective bars belief in a Deity for those so inclined. Having said that, it’s implicit in this way of beholding selfhood that if there were a God, He’d want us to behave as if there weren’t. Like any good parent, He’d want to see us wean ourselves and grow up.
The superself, with its inherent co-creation and co-agency, not only transforms our relationships with each other, it also provides a new perspective on death. As mentioned, it’s arguable whether selves survive the death of the bodies in which they’re encoded. But survivability is much less problematic for superselves. Why? Because they are dispersed and so, like the Internet that was designed to survive nuclear war, provide a more redundant and robust defense against extinction. As William Blake noted two centuries ago:
The generations of men run on in the tide of Time,
But leave their destin’d lineaments permanent for ever and ever.
In the same sense that the soul is deemed to survive the death of the individual, the wenome survives the disintegration of the body and the mind. The absence of a particular individual, as defined by a unique genome and menome, puts hardly a dent in the wenome. The building blocks of superselfhood can be thought of as genes, memes, and wemes. All three encodings are subject to evolutionary pressure.
Although some may feel this reformulation of selfhood asks them to give up the store, it will gradually become apparent that it’s only the storefront that requires a do-over. To give up standalone selfhood in exchange for an open-ended leadership role in cosmic evolution is a trade-off that many will find attractive.
As Norbert Wiener, the Father of Cybernetics, wrote in 1949:
We can be humble and live a good life with the aid of machines, or we can be arrogant and die.
Robert W. Fuller is an author and independent scholar from Berkeley, CA. His most recent book is The Rowan Tree: A Novel.
This is the fifth post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.
“What Is Man?” is the title of a little book by Mark Twain. He held it back for twenty years because he knew the public would hate it. The “what” in the title foreshadows its discomfiting message.
Twain broke with the tradition of asking “Who Am I?” and its species-wide variant “Who Is Man?” on the grounds that a “who-question” is a leading question. It predisposes us to expect the answer to be a sentient being, not unlike ourselves, “whom” we’re trying to identify.
Twain’s answer was that Man is a machine, and he was right about the public reception accorded his thesis: the twentieth century was no more ready for Mark Twain’s mechanistic perspective than the eighteenth had been for Julien Offray de La Mettrie’s metaphor of “Machine Man.”

At the dawn of the twenty-first century, Twain’s answer is no more popular than it was with his contemporaries. But recent research has produced a growing awareness that Mark Twain, while he may have been a killjoy, was, as usual, ahead of his time.
Twentieth-century science has shown that humans, like other animals, function according to the same principles as the cosmos and everything in it. The Hindu seers who proclaimed, “I Am That” were onto something. Man does not stand apart from the rest of the cosmos. He is made of the same stuff and governed by the same laws as everything else. The gap between “I” and “That” does indeed seem to be narrowing.
As curmudgeons like Twain have delighted in pointing out, Man is in fact quite unexceptional. We do not live at the center of the universe: Copernicus and Galileo pointed out that it does not revolve around us. Humans are just one of many animals: Darwin, Wallace, and others placed us, kicking and screaming, in the company of apes. But, having eaten several servings of humble pie, we can surely allow ourselves one small brag.
Although not exceptional in ways we once believed, we are exceptionally good at building tools and machines. And that includes machines that do what we do. Machines that dig, sow, and reap. Machines that kill and machines that save lives. Machines that calculate and, projecting ahead, machines who think. Our brains will soon be viewed as improvable, constrained as they were by the stringent conditions of self-emergence via natural selection, gestation in a uterus, and birth through a baby-sized aperture in the pelvis.
No higher intelligence seems required to create life, including human life. What we revere as life is “just” a property of a handful of chemicals, RNA and DNA holding pride of place among them. But, that’s not a bad thing, because if we’ve come this far without intelligent design, the sky’s the limit once we lend our own inventiveness to the evolutionary process.
This has long been foreseen, but never accepted. Once we get used to it, this perspective will enable us to reduce suffering on a scale only dreamt of. Why? Because the lion’s share of human suffering can be traced to false self-conceptions. The indignities that foul human relationships, at every level, from interpersonal to international, stem from a model of autonomous selfhood in which self is pitted against self.
Rather than masking the indissoluble interconnectedness of selves—as the notion of individual selfhood does—superselfhood embraces it. It’s not just that we can’t do anything without help; we can’t even be apart from continual imitation. Entropic forces disintegrate any identity that is not shored up through a mimetic process of mutual recognition. Since mimesis is distorted and undermined by indignity, reciprocal dignity gradually, but ineluctably, displaces opportunistic predation as a strategy for optimizing group efficiency and productivity. As a source of inefficiency, malrecognition—with all its attendant dysfunctionality—will be rooted out much as we made it our business to combat malnutrition once we understood its toll.
Martin Luther King, Jr. gave expression to this emergent morality when he wrote: “The arc of the moral universe is long, but it bends toward justice.”