1948 – NEVER
A Singularity event is one that permanently changes humanity: the invention of agriculture, written language, and democracy each forever altered the human condition. Are we due another such event this century?
For as long as humans have wandered the earth, our mortality has been front and center in our long list of woes. In every culture, in every age, people have attempted to cheat death. One of the most famous was Qin Shi Huang, king of the Chinese state of Qin in the third century BCE. Obsessed with living forever, he ordered his alchemists and physicians to concoct an elixir of life. They obliged and presented him with what they believed might grant him eternal life. Unfortunately for Qin Shi Huang, what they gave him was a handful of mercury pills, and he died upon consuming them.
We’ve come a long way since Qin’s day, so much so that immortality — or at least unprecedented longevity — appears increasingly plausible sometime this century. Inventor and futurist Ray Kurzweil seems so sure of it that he allegedly takes upwards of 200 dietary supplements a day to forge a “bridge to a bridge,” sustaining his body until the day when radical longevity becomes the norm. The May 2013 issue of National Geographic, in fact, features this very topic.
For now, however, they say we die twice: once when we take our last breath, and again when our name is uttered for the last time.
Our greatest literature, both ancient and modern, seems to confirm this attitude. Countless examples suggest that as much as we strive to achieve everlasting life, death is our inescapable fate. To seek a loophole is folly and smacks of the worst kind of hubris. The earliest such tale, some four thousand years old, relates the ancient Mesopotamian king Gilgamesh’s quest for everlasting life following the death of his friend Enkidu. Although Gilgamesh ultimately fails in his undertaking, he achieves a sort of immortality in the minds of his people as a result of his heroic exploits. The same arrogance is seen in the character of the Greek demigod Achilles, who was said to be impervious to harm everywhere except his heel, which his mother Thetis failed to immerse in the river Styx. Near the end of the Trojan War, he is slain by the lethal accuracy of Paris’s arrow, but Achilles’s courageous feats guarantee that his name lives on in perpetuity.
For those of us who lack the godlike strength and derring-do of Gilgamesh, Achilles, Heracles and other ancient and Classical heroes, the only hope we have at gaining immortality is through emerging age-reversing technology and research into the human brain. Our two leading options appear to be an indefinite halt to the aging process or a sort of digital resurrection — uploading our minds into vast computer servers. But are either of these options desirable?
The former option, the perpetuation of our corporeal bodies, seems at this point to be more scientifically plausible but far less satisfactory. Many stories warn of the dangers of unnaturally extending the shelf-life of our flesh and bones. The legend of the Wandering Jew, for instance, convinces us that everlasting life is a curse, a waking nightmare that results only in unfathomable despair and desperation. According to the legend, the old man scours the world seeking someone who will trade their mortality for his cursed immortality. For two centuries now, Mary Shelley’s gothic novel Frankenstein; or, the Modern Prometheus has terrified readers with the personal, societal and religious implications of reanimating dead tissue. Alphaville’s 1980s anthem of youth “Forever Young” rejects the notion of immortality for its own sake:
It’s so hard to get old without a cause
I don’t want to perish like a fading horse
Youth’s like diamonds in the sun
And diamonds are forever
Forever young, I want to be forever young
Do you really want to live forever, forever and ever?
What’s the use of everlasting life, Alphaville argues, if we can’t maintain a youthful spirit? Better to die with a hopeful eye on the future than to trudge meaninglessly through eternity.
Poets routinely insist that the only fulfilling way for us to achieve immortality is through our art and innovations. In Shakespeare’s “Sonnet 18,” the speaker promises a young man, possibly a lover, that “thy eternal summer shall not fade, / … Nor shall death brag thou wander’st in his shade.” Because the poet has composed the sonnet in the youth’s honor, his memory will last for as long as the poem exists: “So long as men can breathe, or eyes can see, / So long lives this, and this gives life to thee.”
Of course, there are just as many counterarguments to the idea that art leads to eternal life. Romantic poet Percy Shelley’s poem “Ozymandias” tells of a wanderer who comes across a “lifeless,” eroded statue in the desert, whose pedestal reads:
My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!
Despite the statue’s former grandeur, “Nothing beside remains. Round the decay / Of that colossal Wreck, boundless and bare / The lone and level sands stretch far away.” Even this mysterious king’s exploits and fame – whatever they might have been – couldn’t save his memory from the ravages of time. Not only has he died the first time but, as evidenced by the wasteland of his forgotten realm, the second time as well. American filmmaker Woody Allen echoes this sentiment: “I don’t want to achieve immortality through my work. I want to achieve it through not dying.”
But the question remains — is not dying desirable?
If most of us one day have the opportunity to extend our lives indefinitely, how will that change the dynamics of society and culture? A typical person living to 80 years of age goes through several dramatic changes in his lifetime: his opinions and attitudes change, as do his interests, his friends, his career, sometimes even how he remembers the past. Imagine how much change would take place in a thousand years of life! You wouldn’t be even a shadow of the person you once were. Some workers put in 30 or 40 years’ worth of service at a single company or organization, or work in a single industry for as many years, but how dull it would be to continue beyond that. We celebrate when couples reach fifty years of marriage, but could any of them reach 100 years? Two hundred? A thousand? Roughly half of marriages already end in divorce. Would couples, knowing that they are going to live for hundreds of years, wed with the firm understanding that they will eventually split? And how would immortality affect patriotism?
Let’s pretend for a moment that the Wandering Jew really exists. For close to two thousand years, he has shuffled down countless roads, cane in hand, trying to find some fool to take his place. He clearly cannot be the same person now as he was during the time of the Romans. He’s seen far too much and met far too many people to hold on to whatever prejudices he once had. Whatever “science” he might have believed as a young man has since been obliterated. The language he spoke for centuries, Aramaic, will soon die out. His ancient brand of Judaism is no more. He claims no country as his own. Having lived to be two thousand years old, he has seen the rise and fall of dozens of nations and empires. He has come to realize the arbitrariness and fragility of borders as well as of tribal and national pride.
Leaving aside the unpleasantness of experiencing eternity as a decrepit old man and being charged with the impossible task of giving away your decrepitude, what is it about immortality that attracts people so? As Caesar declares in Shakespeare’s play:
Cowards die many times before their deaths;
The valiant never taste of death but once.
The second path to immortality involves uploading our minds onto computer servers, a solution advocated by thinkers such as Kurzweil and Dmitry Itskov. Doing so would immediately eliminate many of the problems outlined above. You need not age in a digital landscape, for one thing. And since your whole existence amounts to lines of computer code, you could conceivably “program” yourself to avoid feeling depression, sadness, doubt and other negative emotions.
But there are other problems in this scenario.
If we upload our minds onto computers, we can “live” for as long as we wish, or as long as the data remains properly archived and resistant to fragmentation, viruses and hacking. After all, the official Space Jam website hasn’t aged a day since it launched back in 1996. But even if every last facet of our memories, temperament, interests, dislikes and habits carries over into the merry old land of ones and zeros, are the digital copies really “us” — the essential us — or simply clever simulations? What’s lost, if anything, in the transfer from a carbon-based world to a silicon one? Perhaps the earliest available opportunities to experience immortality will be faulty and disastrous, resulting in regrettably botched versions of our psyches.
Let’s say you upload your mind today. Now there are two “yous,” the analog you and the digital you. After your analog self dies, your digital self “lives” on. It will no doubt continue to assert that it is just as “real” as you ever were because it has the same memories, the same personality, the same tics and religious beliefs and tastes in women (or men, or both). Otherwise, how can it claim to be you? One of the problems here, if indeed there is one, is that you — the meat sack version — won’t survive to enjoy the immortality you’ve passed on to this immaterial copy of yourself.
Is “good enough” simply not good enough?
We place such a high premium on authenticity. Even if the digital copy of yourself is identical in every possible way, it’s still not the “you” that emerged from your mother’s womb. The same argument can be made with regard to art forgeries, some of the best of which are sold at auction as the real deal. Shaun Greenhalgh, possibly history’s most successful art forger, was so good that he managed to dupe both casual and expert art enthusiasts for years and make close to a million pounds before being caught. Anyone who has one of his remarkably convincing pieces sitting in their house — one of his Rodin knockoffs, for instance — is reasonably entitled to tell visitors that they do indeed have a Rodin. There’s nothing about the piece that gives away the deception, other than the abstract notion of its inauthentic origin. But for most people, that’s enough. No matter what the piece looks like, either Rodin sculpted it with his own hands or he didn’t. Similarly, no matter how convincingly “real” a digital life might be, there are those who would refuse such a life because it lacks that nebulous quality of authenticity.
Of course, like Greenhalgh’s Rodin piece, and as we’ve already discussed, there’s no verifiable way to prove that what you think is reality isn’t actually a fraud. How do you know you’re not already living in a sophisticated computer simulation right now?
Gilgamesh’s and Qin Shi Huang’s quests for everlasting life might come to a close sometime this century. Before that happens, however, we must discuss the implications and consequences of a world in which death is no longer certain. Emily Dickinson, abandoning the desire to live forever, muses: “That it will never come again is what makes life so sweet.” If immortality does become a reality, we will have to reassess what makes life sweet.
“We are an equal opportunity employer and do not discriminate against otherwise qualified applicants on the basis of race, color, religion, national origin, age, sex, veteran status, disability, cybernetic augmentation or lack thereof, or any other basis prohibited by federal, state or local law.”
Most of us are accustomed to seeing this equal opportunity clause when we’re filling out job applications — so much so, in fact, that our eyes tend to skim right over it. Chances are, you’ve seen it so often that you completely ignored the first paragraph. But if you go back and read it carefully, you can see what the equal opportunity clause might someday look like.
Yes, you read it right. Get ready to work alongside cyborgs at the office, the shop and the warehouse. Get ready to send your kids off to be taught and babysat by cyborgs. Get ready to engage in water cooler banter with cyborgs, collaborate with cyborgs, attend power meetings with cyborgs and carpool with cyborgs. Get ready to watch laughably sterile corporate videos at your workplace on how to prevent cyborg-discrimination and what to do if you suspect that it’s occurring.
Because inevitably the next major labor rights movement — here in the US and elsewhere around the world — will involve cyborgs in the workplace. To protect them from being denied employment as a result of their modifications, new anti-discrimination laws will need to be passed. Cybernetic implants such as what cyborg-activist Neil Harbisson wears on a regular basis are out of the ordinary, draw attention to their wearers and therefore might alarm potential employers.
Employers might worry, understandably so, that the technology will be used for purposes other than what the wearer claims, lead to workplace rivalries and disputes, create distraction or drive away clients. Let’s be honest here. Not many employers would be too keen on having someone who wears as much hardware as real-life cyborg Steve Mann does work the cash register.
Steve Mann, an inventor and professor at the University of Toronto, is the perfect example of why such laws will be necessary. In July 2012, Mann was physically assaulted in a Paris McDonald’s by one of its employees presumably because the assailant didn’t appreciate his odd appearance. Cyber-hate crimes such as this will surely become more common in the workplace and elsewhere.
Baseline humans who choose to remain cyber-free, or who can’t afford the technology, will also need to be protected, for the opposite reasons. Because they lack whatever skills or enhancements cybernetic humans are granted through wearable or surgically-embedded technology, employers might hesitate to hire them for or promote them to important positions. Let’s say you manage a group of market research analysts. Who would you be more tempted to bring onto your team: a brilliant baseline Harvard graduate? Or a cyborg who has undergone a procedure that boosts his brain’s calculating power to supercomputer levels?
To establish workable, enforceable anti-cyborg-discrimination laws and policies, many questions will first need to be answered.
The most obvious question: what is a cyborg exactly? Generally speaking, a cyborg is a human who has been modified or augmented with some sort of computer, robotic or cybernetic technology. By this definition, a cyborg is not built from scratch in a manufacturing plant, factory or lab as a robot might be, but is instead conceived through the union of a human egg and sperm cell. Androids, which are nothing more than sophisticated humanoid robots, probably will not be protected under any sort of anti-discrimination laws — at least not until they are sophisticated enough to demonstrate human-like emotions and self-awareness. Thanks to advances in artificial intelligence, robotics and the reverse-engineering of the human brain, that looks more and more feasible.
Even so — assuming that we can one day manufacture an android to resemble a human in every conceivable way, to say nothing of why we would ever have the need or desire to create such a being — it’s unclear whether the law would differentiate between a cyborg and android where labor rights and discrimination in the workplace are concerned. If a corporation can gain personhood status and enjoy certain legal rights and protections, why can’t an android? Would it be cruel and unlawful to make an android work around the clock, even if it showed no signs of fatigue?
When does a human become a cyborg? Where’s the line? Are people with pacemakers, hearing aids and electro-hydraulic prosthetic limbs cyborgs?
Right now, owning and using “distracting” wearable computing such as Google Glass isn’t protected by the law because doing so is a lifestyle choice, sort of like having excessive tattoos, which can likewise preclude enthusiasts from certain occupations (though these attitudes are quickly changing). But over the coming years, cybernetic implants and augmentation will become increasingly ubiquitous, available in all flavors and degrees of performance. The more these technologies are accepted and used by a majority of people, for a great number of everyday tasks, the less they will seem like a choice. Instead, they will be viewed as essential tools for maintaining a “normal,” productive life, the same as an automobile, computer or phone. Though it’s technically possible to go without a phone of some kind — smartphone or otherwise — most of us cannot, and for this reason the only real choice in the matter is which brand of phone to buy and which service provider to contract with.
And yet, in 1876, a Western Union internal memo scoffed at the idea that people would ever need one: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”
Or consider this 1943 comment made by Thomas Watson, then-chairman of IBM, who doubted the pervasive need for computers: “I think there is a world market for maybe five computers.”
Or this one by Digital Equipment Corp. founder Ken Olson, as recently as 1977: “There is no reason anyone would want a computer in their home.”
In 1899, the great Irish-born physicist and engineer William Thomson, Lord Kelvin — who determined the value of absolute zero, among other scientific contributions — strung together a staggering list of boneheadedly inaccurate predictions: “Radio has no future. Heavier-than-air flying machines are impossible. X-rays will prove to be a hoax.”
Wrong. Wrong. Wrong.
And so it will be with cybernetic implants and augmentations. Most people now doubt that such things could ever become mainstream, but as we’ve seen again and again, exciting new technologies tend to fill lifestyle gaps we never knew existed.
Workers in the US are protected in a number of ways. But if employers are required not to discriminate against those with a certain religious preference, which is very much a lifestyle choice, unlike age, sex and race, then perhaps cyborgs will one day have their rights addressed as well.
I believe that being a cyborg is a feeling, it’s when you feel that a cybernetic device is no longer an external element but a part of your organism. ~Neil Harbisson
We know why rain falls from the sky and how distant stars are born. We know the exact height of our planet’s tallest peak and the depth of its deepest ocean. We know that all the world’s landmasses split asunder eons ago from one super-continent and that human beings share a common ancestor with apes. We know why whooping cranes migrate, why salmon swim upstream, and why bats hang upside down. We know that the planet Mercury’s core accounts for about 42 percent of its volume and that the surface temperature of Neptune’s moon Triton plunges to as low as -234 degrees Celsius. We know how to split the atom and unleash unimaginable carnage.
Taking into account all the discoveries we’ve made over the past 2,000 years, it’s amazing that what we know least about is, well, us — specifically, the human brain or, as President Barack Obama describes it, the “three pounds of matter that sits between our ears.”
But that will soon change. (And by “soon,” we mean sometime within the next decade.) The president recently unveiled details of an ambitious new plan to map the human brain. According to the White House’s website, this $100 million undertaking might lead to much-needed benefits such as better treatments or even cures for neurological and emotional disorders, including Parkinson’s, PTSD, traumatic brain injury and bipolar disorder. Although much further in the future, the research might also lead to some sort of advanced human-computer language interface.
The official title for the project is — take a deep breath — the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Its ultimate goal is to “produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space.” Furthermore, it aims to determine how exactly “the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.”
This is exciting news indeed, as the NIH BRAIN Initiative could very well end up being Obama’s Apollo 11 moon landing or Human Genome Project — to name two similarly bold, landmark scientific and exploratory projects pushed by Presidents Kennedy and Clinton. Basically, what we’re talking about here is reverse-engineering the human brain. By devising a map that explains how the 100 billion neurons in our brains connect, behave and operate, we’ll finally begin to approach an understanding of ourselves that rivals the extent of what we know about the carbon cycle, the mating habits of the great white shark and the composition of Martian soil.
Besides practical applications, the NIH Brain Initiative will hopefully give us answers to questions, both profound and trivial, that have stumped even the greatest minds. For instance:
Why do we blush when we feel embarrassed or ashamed? What’s the evolutionary purpose of laughing and expressing humor? Why are yawns contagious? Why do we dream, and why are dreams sometimes so vivid and lucid as to seem as real as “reality”? Why did every primitive culture develop the idea of divine beings, and why do so many millions of people continue to subscribe to the cults built around them? What is consciousness exactly, and why must it be tied to one single person at all times? How can our brains be so goddamn complex — the best of them able to devise new poetic forms and musical genres, theorize the existence of dark matter and sketch an accurately detailed mural of the New York City skyline from memory — yet so clunky and inefficient that we often have difficulty recalling where we left our car keys or what we just read?
Practical results of this years-long study will not come overnight. Hopefully the BRAIN Initiative’s efforts will lead to new treatments and cures of neurological and neurodegenerative disorders and help us become happier, healthier beings. Besides that, who knows what else we might find buried deep in the coffers of the three pounds of matter that sits between our ears? Just as the Human Genome Project has led to advancements in molecular medicine, DNA forensics and bioarchaeology, the NIH’s research will likely have major neurological and societal implications that will change the face of humanity forever.
If you grew up in the 80s, you might remember a TV show called Tales from the Darkside. It was little more than a poor man’s Twilight Zone, but occasionally an episode aired that surprised and shocked you.
One such episode was titled “Mookie and Pookie.” I know — terrible names, but it gets better. The episode features two teenage twins, one of whom, Mookie, is dying from a terminal illness. In his few remaining days, he frantically works to complete the instructions for a sophisticated computer program that he makes Pookie promise she will carry out after his death. Once he dies, Pookie keeps her promise and obsessively follows her deceased brother’s instructions, buying exotic computer parts, assembling them, writing code. She does this despite not having a clue what the result might be and despite her parents’ insistence that she’s wasting her time and money on a project conceived out of desperation. Then the day arrives when she finishes the final step, and after she eagerly boots up the mystery machine, she hears the voice of — presto! — her late brother Mookie. He has risen from the dead! Sort of. Buried somewhere in the ones and zeros and computer circuitry is his consciousness, as present and aware as any healthy teenager — sans physical body.
The episode ends not with newly-digitized Mookie taking over the world’s electric and information infrastructure, but on a warm note with the entire family, computer-boy included, playing a round of Scrabble.
As hokey as Mookie and Pookie’s story is, cybernetic immortality might very well become a reality. Dmitry Itskov, a Russian businessman and founder of Initiative 2045, is currently seeking investors to fund research that will lead to eternal life — with a catch. The catch, of course, is that your body does not persist indefinitely; instead, your consciousness — what makes you you — lives on in a cybernetic Matrix-like environment.
But what’s a body other than a sack of meat to encase one’s consciousness?
That’s the official stance, at least, of Initiative 2045, whose main scientific goal is to “create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.”
To repeat: a “more advanced non-biological carrier.” The explicit assumption is that what millions of years of biological evolution have granted us is vastly, unfathomably inferior to what a few short decades of computer research can achieve. Which is an amazing testament to human intelligence and ingenuity.
Initiative 2045 sees immortality as entirely plausible, a scientific problem that requires a gradual series of intermediary “trans-humanistic transformations,” starting with the replacement of body parts — limbs as well as organs — with non-biological, cybernetic components… and ultimately ending with the replacement of our meat sacks with ones and zeros.
This step-by-step process is analogous to futurist Ray Kurzweil’s concept of the “bridge to bridge” path to immortality, which is why he allegedly takes between 180 and 210 vitamin and mineral supplements a day: to sustain his carbon-based body long enough to see the day when he no longer needs his carbon-based body. Such a radical change in human existence — when shuffling off our mortal coils results not in our deaths but our cybernetic rebirths — unquestionably qualifies as a Singularity event.
As exciting as this all sounds, what remains to be answered by Dmitry Itskov and others is the existential nature of a life lived in cyberspace. What will people “do” with their time — infinite time for that matter? Will we fall in love, have families, go to work, play Scrabble? Will it be necessary to emulate a “normal” life, complete with the laws of physics and the need to eat and sleep? All we “know” is what we’ve seen in sci-fi classics such as William Gibson’s groundbreaking cyberpunk novel Neuromancer and the films Tron and The Matrix. But of course sci-fi tends to exaggerate the implications of speculative technology. Maybe cyberspace will end up as ho-hum as normal space often is.
Or maybe we’re already living in a computer simulation, as many have earnestly theorized. How would we know? After all, what we think of as “reality” is nothing more than a sophisticated construct our minds have created based on sensory data. Colors, sounds, flavors, pain, euphoria — these are all interpretations of the world beyond our senses. At the rate computer science is accelerating, it’s perfectly plausible to imagine an advanced human culture with the capability and means to replicate the experience of, well, life.
Consequently, if we are indeed living in a future culture’s simulation and, while in that simulation, devise a way to upload our consciousnesses into a separate cyberspace, there’s no end to the levels of Inception-like simulations we might simultaneously be experiencing.
Let’s just hope that at least one of them is more interesting than an afternoon playing Scrabble with our folks.
Any sufficiently advanced technology is indistinguishable from magic. ~Arthur C. Clarke
Let’s just say God works too slow. ~Magneto (Ian McKellen), in 2000’s X-Men
Once the stuff of folk religion, telepathy and telekinesis have in modern times been featured in the pages of comic books as well as on-screen in sci-fi and fantasy films, TV shows and video games.
Now, however, it seems likely that we’ll soon — i.e., in the next 30 years or so — be able to achieve what Stan Lee and other writers cooked up in X-Men, as demonstrated by superhuman mutant characters such as the telekinetic Jean Grey and the telepathic Charles Xavier.
Todd Coleman, an electrical engineer at the University of California, San Diego, is currently working on a temporary tattoo that would allow the wearer to operate machines with his thoughts or speak telepathically with others, presumably so long as they’re also tatted up with Coleman’s tiny device. Using bendable electrode arrays that easily attach to the skin, these so-small-they’re-almost-invisible tattoos pick up mental signals and convert them into commands.
Coleman and his research team hope to market the technology for use in the surgery room as well as the virtual cockpit, meaning drone pilots will one day be able to level Pakistani villages from the comfort of their living rooms. No word yet on whether we’ll be able to remotely control nine-foot blue aliens.
We already routinely practice a humble form of telekinesis, of course, though not at the scale promised by Coleman’s tattoo tech. Whenever you willfully raise your arm or take a deep breath, you’re moving an object merely by thinking about it, which is the definition of telekinesis. Your thoughts send electrical signals from your brain to your arms, commanding them to rise, or to your lungs, commanding them to expand and receive air. What prevents you from scooting the chair away from the table without touching it is a simple lack of wiring — physical or otherwise — connecting your mind to said chair.
A lack of wiring is also what prevents those stricken with ALS and other neurodegenerative diseases from retaining control of their bodily movements. Help is allegedly on the way this year, though, in the form of thought-controlled robotic limbs, which will finally give paraplegics, quadriplegics and amputees the much-needed ability to perform simple everyday tasks. Coleman’s research could only increase the likelihood that no one unfortunate enough to contract ALS will be confined to a bed for the rest of his or her life. This application seems much more beneficial to humanity than facilitating drone strikes.
* * *
The difference between the right word and the almost-right word is the difference between lightning and the lightning-bug. ~Mark Twain
Whereas telekinesis has the potential to aid amputees and those with neurodegenerative illnesses as well as assist surgeons and pilots, telepathy has the potential to restructure the dynamics of human relationships at every level — parent-child, wife-husband, teacher-student, employer-employee, doctor-patient, ambassador-ambassador.
In fact, telepathy just might be the cure to war, famine and many other afflictions, human-made or otherwise.
For one thing, telepathy seems to be a more effective and efficient means of communicating with other people than verbal or even written language. Either the essence of what we mean is lost in the words, which, unless you’re a world-class poet, are inevitably inadequate, or we can’t find the right words (and arrangement of words) to satisfactorily convey our message. In many cases, the wrong words can even obscure what we originally meant. Our thoughts, on the other hand, are pure and unfiltered and contain a much richer vocabulary — not just arbitrary, man-made words but also images, emotions, memories and other abstract, albeit meaningful, bits of data that we can never quite seem to translate into human language.
Consider the implications, both good and bad, of having the ability to speak telepathically with others.
With telepathic abilities, world leaders, ambassadors and Congresspeople could discuss important matters with no filters or blinding rhetoric and possibly find common ground. Assuming the beggar on the street corner has telepathic abilities, he could convey his hunger and desperation to passers-by much more effectively than any cardboard sign. A visit to the shrink would certainly be much more helpful than it is now. Having access to memories, emotions and anything else we often have trouble verbalizing would help the psychologist or psychiatrist determine and treat what’s giving us pain.
Telepathy as it’s described here sounds not unlike the Point of View Gun featured in the 2005 film The Hitchhiker’s Guide to the Galaxy. Notably, the gun doesn’t appear in the novel of the same name.
What we fear the most about the idea of telepathy is that we would constantly have “people in our heads,” never giving us a moment’s peace. Or conversely, we’d always be privy to other people’s secrets we’d prefer not to know. Let’s assume for a moment that, were we to have the ability to speak telepathically, it could be shut off like a radio, giving us some level of control. With Facebook, after all, we can pick and choose who gets to see our status updates and whom we want to receive status updates from. Telepathic implants or tattoos might work similarly.
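The Facebook analogy can be made concrete: selective telepathy would amount to per-recipient access control on an outgoing stream of thoughts. Here is a minimal sketch of that idea, with all class and method names invented for illustration:

```python
# Hypothetical sketch: selective "telepathic" broadcast modeled as a
# per-sender allow-list, much like choosing the audience for a status update.

class Mind:
    def __init__(self, name):
        self.name = name
        self.allowed = set()   # minds permitted to receive our thoughts
        self.inbox = []        # thoughts received from others

    def allow(self, other):
        self.allowed.add(other.name)

    def block(self, other):
        self.allowed.discard(other.name)

    def broadcast(self, thought, everyone):
        # Only minds on the allow-list receive the thought;
        # everyone else is simply skipped, arousing no error — only suspicion.
        for mind in everyone:
            if mind.name in self.allowed:
                mind.inbox.append((self.name, thought))

alice, bob, carol = Mind("Alice"), Mind("Bob"), Mind("Carol")
alice.allow(bob)                       # Bob may hear Alice's thoughts
alice.broadcast("hungry", [bob, carol])
print(bob.inbox)    # [('Alice', 'hungry')]
print(carol.inbox)  # []
```

The design question hinted at in the paragraph above survives even in this toy model: Carol can’t read Alice’s mind, but she can notice that she’s been blocked.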
But then, blocking others from gaining entry into our minds would inevitably arouse suspicion.
The X-Men mutants designate themselves as a separate species to differentiate their kind from baseline humans: Homo superior. Able to bend the laws of physics in all imaginable ways, they’ve naturally crossed their own Singularity threshold. But as Arthur C. Clarke’s quote suggests, advanced technology such as we’re likely to witness in the twenty-first century — thanks to the research of Coleman and others — will give us abilities that can only be described as magical.
1948 – NEVER
In his 2005 book The End of Faith, Sam Harris writes the following:
The claims of mystics are neurologically quite astute. No human being has ever experienced an objective world, or even a world at all. You are, at this moment, having a visionary experience. The world that you see and hear is nothing more than a modification of your consciousness, the physical status of which remains a mystery. Your nervous system sections the undifferentiated buzz of the universe into separate channels… [which] are like different spectra of light thrown forth by the prism of the brain. We really are such stuff as dreams are made of.
In other words, everything we think we’re experiencing is nothing more than a simulation, a waking dream, a construct our brains have assembled from only five data streams.
What if, however, we could experience reality objectively, without filters? That is to say, what if we could “see” the world as it must be seen by the Judeo-Christian-Islamic conception of God, who doesn’t have to rely on corporeal eyes, ears, fingers, nose or tongue to gain knowledge of his surroundings? God doesn’t see a sunrise, which is all exterior, all surface. Instead, he sees everything that’s hidden behind the sunrise: the metadata, the coding, the “buzz of the universe.”
Could we humans ever achieve that sublime level of perception? Plenty of recreational drug-users as well as shamans claim to do as much by taking heavy doses of psychotropic hallucinogens or practicing deep meditation. In such a state, the ego melts away and, with it, the senses. The perceiver no longer smells or hears or touches his surroundings, but experiences them. The rise of religions both ancient and modern probably owes much more to shamans’ use of psilocybin mushrooms than the common churchgoing crowd would care to admit.
Is there a more efficient way to experience the buzz?
Perhaps there’s no need for us to. Perhaps we wouldn’t know what to do with so much data, pelting our minds like a never-ending hailstorm. After all, our bodies have evolved over the millennia to exclude all but five data streams. Like an old piece of hardware, we’re plugged in to reality using only these five ports. This simplifies what we can and cannot perceive. Perhaps this simplification once aided our progression as a species. Had we always been able to take in everything, maybe we wouldn’t have given enough attention to essential right-here-right-now tasks like locating a water source or building a fire. Instead, we would have found greater meaning in lying on our hairy backs, marveling at (or stymied by) the limitless cosmos chattering all around us.
And that’s about as far as we would have gotten as a species.
But certainly by now we’ve located enough water sources and built enough fires to sustain us while we subject ourselves to the buzzing of the universe. Could we somehow artificially, cybernetically unite these “separate channels,” as Harris calls them, into one — like some top-of-the-line HDMI cable? Many synesthetes, after all, can “taste” colors and “see” music, so we already have a rare but natural ability to hotwire our brains’ interpretation of sensory data.
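The synesthete’s cross-wiring can be caricatured in code: take data arriving on one channel and run it through another channel’s interpreter. The toy sketch below maps sound onto color, with an entirely made-up mapping; real synesthesia is, of course, far stranger than any formula.

```python
# Toy illustration of "hotwiring" sensory channels: interpret sound
# through the visual channel by mapping pitch onto a color hue.
# The mapping is arbitrary — purely illustrative, not neuroscience.
import math

def pitch_to_hue(freq_hz, low=20.0, high=20000.0):
    """Map an audible frequency (Hz) onto a hue angle (0-360 degrees).

    Perceived pitch is roughly logarithmic, so the mapping
    is done on a log scale across the audible range.
    """
    span = math.log(high) - math.log(low)
    position = (math.log(freq_hz) - math.log(low)) / span
    return round(position * 360, 1)

print(pitch_to_hue(20))     # bottom of the audible range -> hue 0.0
print(pitch_to_hue(20000))  # top of the audible range -> hue 360.0
print(pitch_to_hue(440))    # concert A lands somewhere mid-spectrum
```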
A better question might be: could we somehow artificially remove these channels altogether? It seems as if this would be the only way to truly experience an “objective world.”
For a little fun, check out this aggressively psychedelic video simulating what it’s like to experience the world with synesthesia.
“Performance philosopher” Jason Silva understands the Singularity. It’s obvious from his infectious can’t-quite-spit-the-words-out-fast-enough excitement that he’s spent a lot of time marinating in the idea that, within the next 30 years or so, things are “going to get really weird.”
For the uninitiated, and for those who just can’t spare four and a half minutes to watch Silva’s video, the Singularity is a physics term popularized by futurist and inventor Ray Kurzweil to describe a worldwide event so hugely defining, so epoch-making, so miraculous in its brilliance that all of human existence bottlenecks and, once it emerges on the other side, is dramatically changed forever — changed so greatly, in fact, that people standing on the pre-Singularity side of history cannot reasonably begin to comprehend life as it has become on the other side.
As Silva puts it, try explaining something as complex as a Shakespearean sonnet to one of our ancient prelingual cave-dwelling ancestors. She would probably lack the cognitive muscle to process not only the meaning of figurative language but also language itself. The sounds emanating from your mouth would be just that — sounds. Your way of life would be an existential mystery to her. The ability to speak and communicate and share ideas both practical and metaphysical has enabled humanity to make unimaginable strides in a limitless number of areas. Had we never discovered the ability to make the sounds in our mouths mean something other than what they literally are — sounds — would any of this have been possible?
You wouldn’t be reading this blog, for sure.
So when will the next Singularity event occur?
As always, the Future Culturalist refuses to give specifics. But he will acknowledge that not one but three events as great as the invention of language will take place sometime before 2099. For now, all we can do is speculate. Such speculation is tricky, as we’ve already pointed out. Could our knuckle-dragging forebears ever have anticipated where language would take us? How would we go about explaining the Internet to an ancient, even one as ahead-of-his-time intelligent as, say, Socrates?
Kurzweil — who has made some bold predictions over the years, some of them accurate, most of them not-so-accurate — predicts that the next Singularity will be reached by 2045, when the line separating man from machine will blur completely. The reverse-engineering of human brains will allow us to build smarter computers capable of much more than making lightning-fast calculations. Indeed, they will be able to think, understand symbolic language, and feel, much as a human does. Conversely, breakthroughs in nanotechnology will allow us to implant tiny yet powerful computers in our brains, perhaps even replace large sections of our brains with artificial components, thereby boosting our computational speed and accuracy, not to mention improving our memories.
Ever walk in a room, only to forget why you entered it in the first place? Yeah, that will be going away.
However, the real story isn’t that we’ll be smarter than we are now. The story is that, for the first time in history, one of our inventions — the computer — will become our peer. What will the difference be between a machine with human-like abilities (learning, thinking, reasoning, feeling) and a human with computer-like abilities (making instantaneous calculations, making sophisticated predictions, housing and retrieving vast amounts of data accurately)? Granted, one was assembled in a factory whereas the other slipped crying and screaming from its mother’s vulva.
But the day will indeed arrive when humans and machines see each other as equals, relate to one another — even existentially and religiously — and share more similarities than differences. When that day comes, we’ll know for sure that we’ve passed through yet another Singularity. When that day comes, humanity as we know it will have been irreversibly changed and redefined. We who stand on this side of the Singularity cannot fathom the far-reaching implications of such an event any more than our prelingual ancestors could fathom the implications of human language.
Any Star Wars fan knows that C-3PO is designed for human-cyborg relations. He helps humans (as well as non-humans) communicate with clunky droids such as R2-D2 and even the Millennium Falcon’s hyperdrive engine. As companionable as the ever-fussy C-3PO is, the imminent Singularity will no doubt prove that we have no need for his kind’s services. The reason? An intercessor will not be necessary between two beings who feel one-and-the-same. We will have already become C-3PO, and C-3PO will have already become us.
Plus, the blurring of boundaries between humans and machines might bring new meaning to “human-cyborg relations.”
Silva warned us that things were going to get weird.