1948 – NEVER
For as long as humans have wandered the earth, our mortality has been front and center in our long list of woes. In every culture and every age, people have attempted to cheat death. One of the most famous examples is Qin Shi Huang, king of the Chinese state of Qin in the third century BCE. Obsessed with living forever, he ordered his alchemists and physicians to concoct an elixir of life. They obliged and presented him with what they believed might grant eternal life. Unfortunately for Qin Shi Huang, what they gave him were mercury pills, and the elixir meant to sustain him forever almost certainly poisoned him instead.
We’ve come a long way since Qin’s day — so much so that immortality, or at least unprecedented longevity, appears increasingly plausible sometime this century. Inventor and futurist Ray Kurzweil seems so sure of it that he reportedly takes upwards of 200 dietary supplements a day to forge a “bridge to a bridge”: to keep his body running until the day when radical life extension is the norm. The May 2013 issue of National Geographic, in fact, features this very topic.
For now, however, they say we die twice: once when we take our last breath, and again when our name is uttered for the last time.
Our greatest literature, both ancient and modern, seems to confirm this attitude. Countless examples suggest that as much as we strive for everlasting life, death is our inescapable fate. To seek a loophole is folly and smacks of the worst kind of hubris. The earliest such tale, some four thousand years old, relates the ancient Mesopotamian king Gilgamesh’s quest for everlasting life following the death of his friend Enkidu. Although Gilgamesh ultimately fails in his undertaking, he achieves a sort of immortality in the minds of his people as a result of his heroic exploits. The same hubris marks the Greek demigod Achilles, said to be impervious to harm everywhere but his heel, which his mother Thetis failed to immerse in the river Styx. Near the end of the Trojan War he is slain by Paris’s lethally accurate arrow, but his courageous feats guarantee that his name lives on in perpetuity.
For those of us who lack the godlike strength and derring-do of Gilgamesh, Achilles, Heracles and other ancient and Classical heroes, the only hope we have of gaining immortality lies in emerging age-reversing technology and research into the human brain. Our two leading options appear to be an indefinite halt to the aging process or a sort of digital resurrection — uploading our minds into vast computer servers. But is either of these options desirable?
The former option, the perpetuation of our corporeal bodies, seems at this point to be more scientifically plausible but far less satisfying. Many stories warn of the dangers of unnaturally extending the shelf life of our flesh and bones. The legend of the Wandering Jew, for instance, presents everlasting life as a curse, a waking nightmare that yields only unfathomable despair and desperation. According to the legend, the old man scours the world seeking some soul willing to trade his mortality for the wanderer’s cursed immortality. For two centuries now, Mary Shelley’s gothic novel Frankenstein; or, The Modern Prometheus has terrified readers with the personal, societal and religious implications of reanimating dead tissue. Alphaville’s 1980s anthem of youth “Forever Young” rejects the notion of immortality for its own sake:
It’s so hard to get old without a cause
I don’t want to perish like a fading horse
Youth’s like diamonds in the sun
And diamonds are forever
Forever young, I want to be forever young
Do you really want to live forever, forever and ever?
What’s the use of everlasting life, Alphaville asks, if we can’t maintain a youthful spirit? Better to die with a hopeful eye on the future than to trudge meaninglessly through eternity.
Poets routinely insist that the only fulfilling way for us to achieve immortality is through our art and innovations. In Shakespeare’s “Sonnet 18,” the speaker promises a young man, possibly a lover, that “thy eternal summer shall not fade, / … Nor shall death brag thou wander’st in his shade.” Because the speaker has composed the sonnet in his honor, the youth’s memory will last for as long as the poem exists: “So long as men can breathe, or eyes can see, / So long lives this, and this gives life to thee.”
Of course, there are just as many counterarguments to the idea that art leads to eternal life. Romantic poet Percy Shelley’s poem “Ozymandias” tells of a wanderer who comes across a “lifeless,” eroded statue in the desert, whose pedestal reads:
My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!
Despite the statue’s former grandeur, “Nothing beside remains. Round the decay / Of that colossal Wreck, boundless and bare / The lone and level sands stretch far away.” Even this mysterious king’s exploits and fame – whatever they might have been – couldn’t save his memory from the ravages of time. Not only has he died the first death but, as evidenced by the wasteland of his forgotten realm, the second as well. American filmmaker Woody Allen echoes this sentiment: “I don’t want to achieve immortality through my work. I want to achieve it through not dying.”
But the question remains — is not dying desirable?
If most of us one day have the opportunity to extend our lives indefinitely, how will that change the dynamics of society and culture? A typical person living to 80 goes through several dramatic changes in a lifetime: opinions and attitudes change, as do interests, friends, careers, sometimes even memories of the past. Imagine how much change would take place across a thousand years of life! You’d scarcely be a shadow of the person you once were. Some workers put in 30 or 40 years’ worth of service at a single company or in a single industry, but how dull it would be to continue far beyond that. We celebrate when couples reach fifty years of marriage, but could any of them reach 100 years? Two hundred? A thousand? Roughly half of marriages already end in divorce. Would couples, knowing they are going to live for hundreds of years, wed with the firm understanding that they will eventually split? And how would immortality affect patriotism?
Let’s pretend for a moment that the Wandering Jew really exists. For close to two thousand years, he has shuffled down countless roads, cane in hand, trying to find some fool to take his place. He clearly cannot be the same person now as he was in Roman times. He’s seen far too much and met far too many people to hold on to whatever prejudices he once had. What “science” he might have believed as a young man has since been obliterated. The language he spoke for centuries, Aramaic, is nearly extinct. His ancient brand of Judaism is no more. He claims no country as his own. Having lived two thousand years, he has seen the rise and fall of dozens of nations and empires. He has come to realize the arbitrariness and fragility of borders as well as of tribal and national pride.
Leaving aside the unpleasantness of experiencing eternity as a decrepit old man charged with the impossible task of giving away your decrepitude, what is it about immortality that attracts people so? As Caesar declares in Shakespeare’s play:
Cowards die many times before their deaths;
The valiant never taste of death but once.
Of all the wonders that I yet have heard,
It seems to me most strange that men should fear,
Seeing that death, a necessary end,
Will come when it will come.
The second route to immortality involves uploading our minds onto computer servers, a solution advocated by thinkers such as Kurzweil and Dmitry Itskov. Doing so would immediately eliminate many of the problems outlined above. You need not age in a digital landscape, for one thing. And since your whole existence would amount to lines of computer code, you could conceivably “program” yourself to avoid feeling depression, sadness, doubt and other negative emotions.
But there are other problems in this scenario.
If we upload our minds onto computers, we can “live” for as long as we wish — or at least as long as the data remains properly archived and resistant to corruption, viruses and hacking. After all, the official Space Jam website hasn’t aged a day since it launched back in 1996. But even if every last facet of our memories, temperament, interests, dislikes and habits carries over into the merry old land of ones and zeros, are the digital copies really “us” — the essential us — or simply clever simulations? What’s lost, if anything, in the transfer from a carbon-based world to a silicon one? Perhaps the earliest opportunities to experience immortality will be faulty and disastrous, resulting in regrettably botched versions of our psyches.
Let’s say you upload your mind today. Now there are two “yous,” the analog you and the digital you. After your analog self dies, your digital self “lives” on. It will no doubt continue to assert that it is just as “real” as you ever were because it has the same memories, the same personality, the same tics and religious beliefs and tastes in women (or men, or both). Otherwise, how can it claim to be you? One of the problems here, if indeed there is one, is that you — the meat sack version — won’t survive to enjoy the immortality you’ve passed on to this immaterial copy of yourself.
Is “good enough” simply not good enough?
We place a high premium on authenticity. Even if the digital copy of yourself is identical in every possible way, it’s still not the “you” that emerged from your mother’s womb. The same argument can be made with regard to art forgeries, some of the best of which are sold at auction as the real deal. Shaun Greenhalgh, possibly history’s most successful art forger, was so good that he duped both casual and expert art enthusiasts for years, earning close to a million pounds before being caught. Anyone with one of his remarkably convincing pieces sitting in their house — one of his Rodin knockoffs, for instance — could plausibly tell visitors that they do indeed own a Rodin. Nothing about the piece gives away the deception other than the abstract notion of its inauthentic origin. But for most people, that abstract notion is enough. No matter what the piece looks like, either Rodin sculpted it with his own hands or he didn’t. Similarly, no matter how convincingly “real” a digital life might be, there are those who would refuse such a life because it lacks that nebulous quality of authenticity.
Of course, like Greenhalgh’s Rodin piece, and as we’ve already discussed, there’s no certifiable way to prove that what you take for reality isn’t itself a fraud. How do you know you’re not already living in a sophisticated computer simulation right now?
The quests of Gilgamesh and Qin Shi Huang might finally come to a close sometime this century. Before that happens, however, we must discuss the implications and consequences of a world in which death is no longer certain. Emily Dickinson, abandoning the desire to live forever, muses: “That it will never come again is what makes life so sweet.” If immortality does become a reality, we will have to reassess where life’s sweetness lies.
When we hear the word “cyborg,” we think of an emotionless being that has lost — or was never granted — its individuality or right to privacy. We think of the worst kind of collectivist entrapment, a state of perpetual mindlessness that seeks only to follow directives passed down from some higher authority. We think of the Terminator, RoboCop and Star Trek’s Seven of Nine.
This negative attitude toward cyborgs has fed a massive backlash against Google Glass, which many people feel is an assault on privacy and individuality. An advocacy group, Stop the Cyborgs, is campaigning to limit the use of intrusive devices such as Google Glass in order to “stop a future in which privacy is impossible and central control total.” Some businesses have already banned the device from their premises. The first such establishment, the 5 Point Cafe in Seattle — which describes Google Glass as a “new fad for the fanny-pack wearing never removing your bluetooth headset wearing crowd” — has effectively aligned itself with Star Wars’s droid-hating Mos Eisley Cantina.
It should be noted that the 5 Point Cafe’s ban stems partly from respect for its patrons’ right to privacy, partly from sardonic humor, and partly from rabble-rousing for media attention — but mostly from the fact that the thing looks, well, dumb. Its faux-futuristic, Apple Store aesthetic doesn’t fit the cafe’s Seattle-counterculture, hole-in-the-wall reputation. Its slogan, after all, is “Alcoholics serving alcoholics since 1929.”
The 5 Point Cafe’s disapproval of Google Glass also says a lot about the majority of Americans’ attitudes toward what they perceive as a gradual loss of privacy and individual freedoms due to technological intrusion. Smartphones are just as guilty of this as Google Glass, but the latter’s always-visible, always-on, always-pointed-at-you functionality crosses a line that makes many people uncomfortable. We just want to be left the hell alone. The idea of being secretly filmed — by any device, for any reason — makes us squirm, even though we’re knowingly caught on surveillance cameras dozens if not hundreds of times a day. We desire privacy and respect for what makes each of us unique, and when we don’t get it, we feel less-than-human. We feel as if we’re being treated like an animal.
Or worse, we feel as if we’re being treated like a cyborg, which is essentially a tool. And since tools don’t receive empathy or privacy, neither should a cyborg.
So maybe this is why the Mos Eisley Cantina’s barkeep gets all huffy when Luke tries to enter with his recently acquired droids. Although they appear to have emotions and personalities, C-3PO and R2-D2 are really cybernetic frauds, artificial charlatans vainly passing themselves off as equals to the Cantina’s other patrons. It’s an insult. To the surly proprietor, the droids’ transparent mimicry of self-awareness — and their presumption to the rights sentient beings enjoy — mocks the privilege of actually being sentient.
This repulsion toward androids and cyborgs can be attributed to the uncanny valley effect, first described by roboticist Masahiro Mori in 1970. Simply put, when we are confronted with a robot that resembles a human but doesn’t get human behavior quite right — its eyes might not blink like ours, or its movements might appear too jerky or calculated — it creeps us out.
And so it is with Google Glass. When people wearing it start appearing on the streets, the sight will surely provoke unease and skepticism in some observers.
They might ask: What are they doing with that thing? Am I being recorded or filmed? When I speak to them, are they tuning me out by listening to music, watching a movie or checking the weather forecast? Are they mentally correcting my factual errors using Wikipedia without my knowledge? Are they using face-recognition technology to scan and analyze me? Do they know all about me — my name, my Social Security number, my past, my secrets?
What part of their humanity and uniqueness did they have to give up to enjoy the benefits of Google Glass?
As admirable as the efforts of Stop the Cyborgs and the 5 Point Cafe may be, there’s little hope that the cyborg-ification of humans will stop. No child wants to grow up to be a cyborg, yet humanity is increasingly becoming cybernetic. Many people cannot reasonably function without hearing aids, artificial hips, mind-controlled prosthetic limbs or computerized speech generators. These devices are necessities, and no one faults their users for taking advantage of them. Google Glass is admittedly a different beast altogether: it is an elective tool, and one that could be used to violate non-wearers’ privacy.
But right or wrong, it’s only the beginning. From retinal implants that perform the same tasks as Google Glass and more, to telekinetic tattoos and nanobots, we’ll be so hard-wired with tech that, as futurists such as Kurzweil predict, the line separating man and machine will blur.
By then, will we even care about abstract liberties such as privacy and individuality?
It’s almost impossible to fathom now, but perhaps in the future we’ll look back and wonder why we cherished our individuality so much and resisted collectivism. After all, privacy as we now know it is a relatively modern phenomenon that we take for granted. Most of us wouldn’t be able to tolerate the constant physical togetherness and lack of solitude that defined a medieval European lifestyle. But since then we’ve readjusted our attitudes toward privacy and individuality, and chances are they will need to be readjusted again. Perhaps once most of us are wired to communicate telepathically and always be aware of each other’s locations and identities, we’ll find popular twentieth- and twenty-first-century depictions of cyborgs to be quaint, naïve and, yes, even a little offensive.
If you grew up in the ’80s, you might remember a TV show called Tales from the Darkside. It was little more than a poor man’s Twilight Zone, but occasionally an episode aired that surprised and shocked you.
One such episode was titled “Mookie and Pookie.” I know — terrible names, but it gets better. The episode features teenage twins, one of whom, Mookie, is dying of a terminal illness. In his few remaining days, he frantically completes the instructions for a sophisticated computer program that he makes his sister Pookie promise to carry out after his death. Once he dies, Pookie keeps her promise and obsessively follows her late brother’s instructions — buying exotic computer parts, assembling them, writing code. She does this despite not having a clue what the result might be, and despite her parents’ insistence that she’s wasting time and money on a project conceived out of desperation. Then the day arrives when she finishes the final step, and after she eagerly boots up the mystery machine, she hears the voice of — presto! — her brother Mookie. He has risen from the dead! Sort of. Buried somewhere in the ones and zeros and computer circuitry is his consciousness, as present and aware as any healthy teenager — sans physical body.
The episode ends not with a newly digitized Mookie taking over the world’s electrical and information infrastructure, but on a warm note, with the entire family — computer-boy included — playing a round of Scrabble.
As hokey as Mookie and Pookie’s story is, cybernetic immortality might very well become a reality. Dmitry Itskov, a Russian businessman and founder of Initiative 2045, is currently seeking investors to fund research that will lead to eternal life — with a catch. The catch, of course, is that your body does not persist indefinitely; instead, your consciousness — what makes you you — lives on in a cybernetic Matrix-like environment.
But what’s a body other than a sack of meat to encase one’s consciousness?
That’s the official stance, at least, of Initiative 2045, whose main scientific goal is to “create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.”
To repeat: a “more advanced non-biological carrier.” The explicit assumption is that what millions of years of biological evolution have granted us is vastly, unfathomably inferior to what a few short decades of computer research can achieve — which, if true, is an amazing testament to human intelligence and ingenuity.
Initiative 2045 sees immortality as entirely plausible, a scientific problem that requires a gradual series of intermediary “trans-humanistic transformations,” starting with the replacement of body parts — limbs as well as organs — with non-biological, cybernetic components… and ultimately ending with the replacement of our meat sacks with ones and zeros.
This step-by-step process is analogous to Kurzweil’s concept of the “bridge to a bridge” path to immortality, which is why he reportedly takes between 180 and 210 vitamin and mineral supplements a day: to sustain his carbon-based body long enough to see the day when he no longer needs it. Such a radical change in human existence — when shuffling off our mortal coils results not in our deaths but in our cybernetic rebirths — unquestionably qualifies as a Singularity event.
As exciting as this all sounds, what remains to be answered by Dmitry Itskov and others is the existential nature of a life lived in cyberspace. What will people “do” with their time — infinite time, for that matter? Will we fall in love, have families, go to work, play Scrabble? Will it be necessary to emulate a “normal” life, complete with the laws of physics and the need to eat and sleep? All we “know” is what we’ve seen in sci-fi classics such as William Gibson’s groundbreaking cyberpunk novel Neuromancer and the films Tron and The Matrix. But of course sci-fi tends to exaggerate the implications of speculative technology. Maybe cyberspace will end up as ho-hum as normal space often is.
Or maybe we’re already living in a computer simulation, as many have earnestly theorized. How would we know? After all, what we think of as “reality” is nothing more than a sophisticated construct our minds have created based on sensory data. Colors, sounds, flavors, pain, euphoria — these are all interpretations of the world beyond our senses. At the rate computer science is accelerating, it’s perfectly plausible to imagine an advanced human culture with the capability and means to replicate the experience of, well, life.
Consequently, if we are indeed living in a future culture’s simulation and, while in that simulation, devise a way to upload our consciousnesses into a separate cyberspace, there’s no end to the levels of Inception-like simulations we might be simultaneously experiencing.
Let’s just hope that at least one of them is more interesting than an afternoon playing Scrabble with our folks.
According to the MIT Technology Review, futurist and inventor Ray Kurzweil’s new job at Google entails designing and building a type of artificial intelligence that will serve as a super-sophisticated personal assistant — or what will eventually become humanity’s godlike overlord.
The details of the new AI system are frustratingly scarce. Presumably it will gather and analyze all available information in Google’s databases (in other words, the entirety of human knowledge) and even listen in on phone conversations and read emails.
No, we’re not making this up.
As fascinating as this ambitious project sounds, we can’t help but wonder if Kurzweil has ever seen 2001: A Space Odyssey or the Terminator series.
Both 2001 and Terminator resonate as cautionary tales of the dangers of unrestrained, sentient artificial intelligence. The two narratives tap into humanity’s unconscious and ancient apprehension of technology we don’t fully comprehend, a theme that can be traced at least as far back as the Jewish folktale of the Golem.
Of course, there are a few popular stories that convey the opposite theme, that artificial intelligence will be humanity’s salvation. A classic example is the Isaac Asimov short story “The Last Question,” which Asimov felt was his best work. (Seriously, if you haven’t already read it, do so now. We’ll wait.)
At this point, what Google and Kurzweil are up to is a bit unclear. Don’t most of us already carry access to the full extent of human knowledge in our pockets? Whether their adventures in AI give us smartphones on steroids or spell the end of humanity, we’ll continue to report the news along the way.
“Performance philosopher” Jason Silva understands the Singularity. It’s obvious from his infectious can’t-quite-spit-the-words-out-fast-enough excitement that he’s spent a lot of time marinating in the idea that, within the next 30 years or so, things are “going to get really weird.”
For the uninitiated, and for those who just can’t spare four and a half minutes to watch Silva’s video, the Singularity is a physics term co-opted by futurists — most famously inventor Ray Kurzweil — to describe a worldwide event so hugely defining, so epoch-making, so miraculous in its implications that all of human existence bottlenecks and, once it emerges on the other side, is changed forever — changed so greatly, in fact, that people standing on the pre-Singularity side of history cannot reasonably begin to comprehend life as it will be on the other side.
As Silva puts it, try explaining something as complex as a Shakespearean sonnet to one of our ancient prelingual cave-dwelling ancestors. She would probably lack the cognitive muscle to process not only the meaning of figurative language but also language itself. The sounds emanating from your mouth would be just that — sounds. Your way of life would be an existential mystery to her. The ability to speak and communicate and share ideas both practical and metaphysical has enabled humanity to make unimaginable strides in a limitless number of areas. Had we never discovered the ability to make the sounds in our mouths mean something other than what they literally are — sounds — would any of this have been possible?
You wouldn’t be reading this blog, for sure.
So when will the next Singularity event occur?
As always, the Future Culturalist refuses to give specifics. But he will acknowledge that not one but three events as momentous as the invention of language will take place sometime before 2099. For now, all we can do is speculate — and such speculation is tricky, as we’ve already pointed out. Could our knuckle-dragging forebears ever have anticipated where language would take us? How would we go about explaining the Internet to an ancient, even one as far ahead of his time as, say, Socrates?
Kurzweil — who has made some bold predictions over the years, some of them accurate, most of them not-so-accurate — predicts that the next Singularity will be reached by 2045, when the line separating man from machine will blur completely. The reverse-engineering of human brains will allow us to build smarter computers capable of much more than making lightning-fast calculations. Indeed, they will be able to think and understand symbolic language and feel like a human. Conversely, breakthroughs in nanotechnology will allow us to implant tiny yet powerful computers in our brains, perhaps even replace large sections of our brains with artificial components, thereby boosting our computational speed and accuracy, not to mention improving our memories.
Ever walk into a room, only to forget why you entered it in the first place? Yeah, that will be going away.
However, the real story isn’t that we’ll be smarter than we are now. The story is that, for the first time in history, one of our inventions — the computer — will become our peer. What will the difference be between a machine with human-like abilities (learning, thinking, reasoning, feeling) and a human with computer-like abilities (making instantaneous calculations, making sophisticated predictions, housing and retrieving vast amounts of data accurately)? Granted, one was assembled in a factory whereas the other slipped crying and screaming from its mother’s vulva.
But the day will indeed arrive when humans and machines see each other as equals, relate to one another — even existentially and religiously — and share more similarities than differences. When that day comes, we’ll know for sure that we’ve passed through yet another Singularity. When that day comes, humanity as we know it will have been irreversibly changed and redefined. We who stand on this side of the Singularity cannot fathom the far-reaching implications of such an event any more than our prelingual ancestors could fathom the implications of human language.
Any Star Wars fan knows that C-3PO is designed for human-cyborg relations. He helps humans (as well as non-humans) communicate with clunky droids such as R2-D2 and even the Millennium Falcon’s hyperdrive engine. As companionable as the ever-fussy C-3PO is, the imminent Singularity will no doubt prove that we have no need for his kind’s services. The reason? An intercessor will not be necessary between two beings who feel one-and-the-same. We will have already become C-3PO, and C-3PO will have already become us.
Plus, the blurring of boundaries between humans and machines might bring new meaning to “human-cyborg relations.”
Silva warned us that things were going to get weird.