How will humanity be different in the year 2099?
When Friedrich Nietzsche proclaimed that “God is dead” in 1882, it caused a major uproar in his native Germany and the rest of Europe. Although not strictly an atheistic statement — more likely he was criticizing how far our industrialized culture has drifted from the biblical, Abrahamic concept of God — his heretical words would doubtless have gotten him executed had he uttered them a few hundred years earlier.
And now, in 2013, a woman can admit on live television that she’s atheist with dignity and total impunity. In fact, her interviewer — in this case, CNN anchor Wolf Blitzer — chuckles at her admission (and perhaps at his own embarrassment for pressing her to “thank the Lord”) and even warmly touches her shoulder.
By now everyone has seen the short exchange between Blitzer and 30-year-old Oklahoman Rebecca Vitsmun, who, along with her husband and 18-month-old son, survived the recent tornado that killed 24 people as of this writing. For such a seemingly unimportant news clip, it has stirred a national dialogue about the state of religion in America and where it is headed, and it has given figurative ammo to loud voices on both sides of the culture war:
Atheists: “See? Good things happen even to those who worship no god.”
Believers: “True, but this event will finally compel her to have faith in miracles and guardian angels.”
A: “Miracles, shmiracles. The tornado was a natural event without purpose, and her survival was based purely on hasty decision-making and dumb luck.”
B: “So she survived, but for her blasphemous denial of all that is holy, she will spend eternity in hell.”
A: [eating a sandwich] “She seems like a pretty cool mom.”
B: [turning red in the face] “God set this tornado upon Oklahoma to teach the state’s lone atheist a lesson… and ended up inadvertently taking the lives of her neighbors… while sparing hers and her family’s…”
A: “I guess a burning bush would’ve been too subtle.”
And so on.
Thirteen million. That’s how many atheists and agnostics are estimated to be living now in the U.S. alone. Although still a small fraction of the nation’s population — roughly 4 percent — this number grows with each passing year. Why? Is it because of our lack of morals? Our growing disinterest in tradition?
Possibly. But the main culprit is hands-down science.
Over the course of the last four hundred years — since at least Galileo and the dawn of modern science — the work of biologists and physicists has shown us, empirically, that ours is a universe with no author or purpose. Charles Darwin and later evolutionists such as Stephen Jay Gould demonstrated that we and other species are here not because of some “grand watchmaker” but because of natural selection and other complex, observable processes. British physicist Stephen Hawking’s The Grand Design (2010) and American physicist Lawrence Krauss’s A Universe from Nothing (2012) both convincingly argue that a god is not required to create a universe or the life that inhabits it. Contemporary research into quantum mechanics suggests that our existence rests on the probabilistic behavior of subatomic particles, some of which can, counterintuitively, exist in more than one place at any given moment. Albert Einstein, in fact, rejected the implications of the randomness and unpredictability that are inherent in and essential to quantum theory, famously saying that “God does not play dice.”
As Hawking himself has explained, science has not disproved, nor will it ever disprove, the existence of God. The role of science is to measure and speculate on that which is observable or at least testable — and God is neither.
However — and this is a biggie — science shows us more and more that the idea of God is irrelevant. The creation of the universe, the origin of species, the “miraculous” events in our lives such as surviving a tornado — whether you believe in God or not, these things occur every day without help from a divine being.
“I don’t blame anyone for thanking the Lord,” Rebecca Vitsmun said as she hugged her child closely. What a classy, non-douchey attitude toward religiosity. Perhaps in 2099, when atheists far outnumber believers in the First World, we’ll demonstrate the same sort of anthropological respect toward the biblical concept of an all-powerful, all-knowing god.
After all, this is how we feel about Zeus and Odin and Ra, once-powerful but now “dead” deities. It’s simply evolution for the Abrahamic god to follow.
As children we all played the game where we competed to see who could hold their breath longer than anyone else in the pool. Think back. How long could you hold it before the burning sensation compelled you to surface and gulp lungfuls of air? Thirty seconds? A minute? A minute and a half?
As impressive as those times are, you soon might be able to hold your breath for up to 30 minutes without any adverse effects.
Amazingly, scientists at the Boston Children’s Hospital have devised a microparticle that, when injected into the bloodstream, can super-oxygenate a person’s blood and allow them to live for up to 30 minutes without having to take a single breath.
Currently the person with the distinction of holding his breath the longest is German free diver Tom Sietas, who in June 2012 remained underwater for a staggering 22 minutes and 22 seconds. With the new technology, an ordinary person could outlast even that record by nearly eight minutes.
Such a scientific and medical breakthrough has countless applications, the most obvious of which is saving lives in emergency rooms and hospitals. Every household’s medicine cabinet might one day store, next to the Tylenol and cough medicine, a microparticle dispenser of some kind that can be administered to a family member who happens to get a chicken bone lodged in his throat. Long-distance runners might use the technology to ensure that their blood receives enough oxygen. As a precaution, parents might give their children a shot of super-oxygen before dropping them off at the community pool or taking them out on the fishing boat. People who enjoy erotic asphyxiation — the act of deliberately restricting airflow for sexual pleasure — could keep a treatment on hand in case things go too far. (Of course, someone else would need to be present to administer the dosage, since the person in question would be unconscious.)
Imagine the military applications. Soldiers who never tire? Navy SEALs who need not surface for air until the most opportune time?
The DC character Aquaman, who can speak telepathically with sea creatures as well as breathe underwater, often gets mocked by readers and geeks for having the least useful and desirable superpowers among his fellow Justice League members.
Mockery aside, no one would scoff at a person’s amazing ability to hold his breath for half an hour, thereby making him King of the Pool.
But like any new cutting-edge technology, it might take some time before these so-called microparticles are available for general consumption. So, you know, don’t hold your breath.
For as long as humans have wandered the earth, our mortality has been front and center in our long list of woes. In every culture, in every age, people have attempted to cheat death — one of the most famous being Qin Shi Huang, king of the Chinese state of Qin in the third century BCE. Obsessed with living forever, he ordered his alchemists and physicians to concoct an elixir of life. They obliged and presented him with what they believed would grant him eternal life. Unfortunately for Qin Shi Huang, what they gave him was a handful of mercury pills, and he died upon consuming them.
We’ve come a long way since Qin’s day — so much so that immortality, or at least unprecedented longevity, appears increasingly plausible sometime this century. Inventor and futurist Ray Kurzweil seems so sure of it that he reportedly takes upwards of 200 dietary supplements a day, a “bridge to a bridge” meant to carry him to a time when radical longevity is the norm. The May 2013 issue of National Geographic, in fact, features this very topic.
For now, however, they say we die twice: once when we take our last breath, and again when our name is uttered for the last time.
Our greatest literature, both ancient and modern, seems to confirm this attitude. Countless examples suggest that as much as we strive to achieve everlasting life, death is our inescapable fate. To seek a loophole is folly and smacks of the worst kind of hubris. The earliest such tale, some four thousand years old, relates the ancient Mesopotamian king Gilgamesh’s quest for everlasting life following the death of his friend Enkidu. Although Gilgamesh ultimately fails in his undertaking, he achieves a sort of immortality in the minds of his people as a result of his heroic exploits. The same arrogance is seen in the character of the Greek demigod Achilles, who was said to be impervious to harm in every part of his body except his heel, which his mother Thetis failed to immerse in the river Styx. Near the end of the Trojan War, he is slain by the lethal accuracy of Paris’s arrow, but Achilles’s courageous feats guarantee that his name lives on in perpetuity.
For those of us who lack the godlike strength and derring-do of Gilgamesh, Achilles, Heracles and other ancient and Classical heroes, the only hope we have of gaining immortality lies in emerging age-reversing technology and research into the human brain. Our two leading options appear to be an indefinite halt to the aging process or a sort of digital resurrection — uploading our minds into vast computer servers. But is either option desirable?
The former option, the perpetuation of our corporeal bodies, seems at this point to be more scientifically plausible but far less satisfying. Many stories warn of the dangers of unnaturally extending the shelf life of our flesh and bones. The legend of the Wandering Jew, for instance, insists that everlasting life is a curse, a waking nightmare that results only in unfathomable despair and desperation. According to the legend, the old man scours the world seeking someone who will exchange his mortality for his cursed immortality. For two centuries now, Mary Shelley’s gothic novel Frankenstein; or, The Modern Prometheus has terrified readers with the personal, societal and religious implications of reanimating dead tissue. Alphaville’s 1980s anthem of youth “Forever Young” rejects the notion of immortality for its own sake:
It’s so hard to get old without a cause
I don’t want to perish like a fading horse
Youth’s like diamonds in the sun
And diamonds are forever
Forever young, I want to be forever young
Do you really want to live forever, forever and ever?
What’s the use of everlasting life, Alphaville argues, if we can’t maintain a youthful spirit? Better to die with a hopeful eye on the future than to trudge meaninglessly through eternity.
Poets routinely insist that the only fulfilling way for us to achieve immortality is through our art and innovations. In Shakespeare’s “Sonnet 18,” the speaker promises a young man — a friend or possible lover — that “thy eternal summer shall not fade, / … Nor shall death brag thou wander’st in his shade.” Because the poet has composed the sonnet in the youth’s honor, his memory will last for as long as the poem exists: “So long as men can breathe, or eyes can see, / So long lives this, and this gives life to thee.”
Of course, there are just as many counterarguments to the idea that art leads to eternal life. Romantic poet Percy Shelley’s poem “Ozymandias” tells of a wanderer who comes across a “lifeless,” eroded statue in the desert, whose pedestal reads:
My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!
Despite the statue’s former grandeur, “Nothing beside remains. Round the decay / Of that colossal Wreck, boundless and bare / The lone and level sands stretch far away.” Even this mysterious king’s exploits and fame — whatever they might have been — couldn’t save his memory from the ravages of time. Not only has he died the first death but, as evidenced by the wasteland of his forgotten realm, the second as well. American filmmaker Woody Allen echoes this sentiment: “I don’t want to achieve immortality through my work. I want to achieve it through not dying.”
But the question remains — is not dying desirable?
If most of us one day have the opportunity to extend our lives indefinitely, how will that change the dynamics of society and culture? A typical person living to 80 years of age goes through several dramatic changes in his lifetime: his opinions and attitudes change, as do his interests, his friends, his career, sometimes even how he remembers the past. Imagine how much change would take place in a thousand years of life! You would scarcely be a shadow of the person you once were. Some workers put in 30 or 40 years’ worth of service at a single company or organization, or work in a single industry for as many years — but how dull it would be to continue for centuries beyond that. We celebrate when couples reach fifty years of marriage, but could any of them reach 100 years? Two hundred? A thousand? Nearly half of marriages end in divorce already. Would couples, knowing that they are going to live for hundreds of years, wed with the firm understanding that they will eventually split? How would immortality affect patriotism?
Let’s pretend for a moment that the Wandering Jew really exists. For close to two thousand years, he has shuffled down countless roads, cane in hand, trying to find some fool to take his place. He clearly cannot be the same person now as he was during the time of the Romans. He’s seen far too much and met far too many people to hold on to whatever prejudices he once had. What “science” he might have believed as a young man has since been obliterated. The language he spoke for centuries, Aramaic, may soon die out. His ancient brand of Judaism is no longer practiced. He claims no country as his own. Having lived for two thousand years, he has seen the rise and fall of dozens of nations and empires. He has come to realize the arbitrariness and fragility of borders, as well as of tribal and national pride.
Leaving aside the unpleasantness of experiencing eternity as a decrepit old man charged with the impossible task of giving away your decrepitude, what is it about immortality that attracts people so? As Caesar declares in Shakespeare’s play:
Cowards die many times before their deaths;
The valiant never taste of death but once.
The second route to immortality involves uploading our minds onto computer servers, a solution advocated by thinkers such as Kurzweil and Dmitry Itskov. Doing so would immediately eliminate many of the problems outlined above. You need not age in a digital landscape, for one thing. And since your whole existence amounts to lines of computer code, you could conceivably “program” yourself to avoid feeling depression, sadness, doubt and other negative emotions.
But there are other problems in this scenario.
If we upload our minds onto computers, we can “live” for as long as we wish — or at least as long as the data remains properly archived and resistant to corruption, viruses and hacking. After all, the official Space Jam website hasn’t aged a day since it launched back in 1996. But even if every last facet of our memories, temperament, interests, dislikes and habits carries over into the merry old land of ones and zeros, are the digital copies really “us” — the essential us — or simply clever simulations? What’s lost, if anything, in the transfer from a carbon-based world to a silicon one? Perhaps the earliest available opportunities to experience immortality will be faulty and disastrous, resulting in regrettably botched versions of our psyches.
Let’s say you upload your mind today. Now there are two “yous,” the analog you and the digital you. After your analog self dies, your digital self “lives” on. It will no doubt continue to assert that it is just as “real” as you ever were because it has the same memories, the same personality, the same tics and religious beliefs and tastes in women (or men, or both). Otherwise, how can it claim to be you? One of the problems here, if indeed there is one, is that you — the meat sack version — won’t survive to enjoy the immortality you’ve passed on to this immaterial copy of yourself.
Is “good enough” simply not good enough?
We place such a high premium on authenticity. Even if the digital copy of yourself is identical in every possible way, it’s still not the “you” that emerged from your mother’s womb. The same argument can be made with regard to art forgeries, some of the best of which have been sold at auction as the real deal. Shaun Greenhalgh, possibly history’s most successful art forger, was so good that he managed to dupe both casual and expert art enthusiasts for years and make close to a million pounds before being caught. Anyone who has one of his remarkably convincing pieces sitting in the house — one of his Rodin knockoffs, for instance — could plausibly tell visitors that it is indeed a Rodin. Nothing about the piece gives away the deception other than the abstract fact of its inauthentic origin. But for most people, that fact is enough. No matter what the piece looks like, either Rodin sculpted it with his own hands or he didn’t. Similarly, no matter how convincingly “real” a digital life might be, there are those who would refuse such a life because it lacks that nebulous quality of authenticity.
Of course, like Greenhalgh’s Rodin piece, and as we’ve already discussed, there’s no certifiable way to disprove that what you think is reality is actually a fraud. How do you know you’re not already living in a sophisticated computer simulation right now?
The quest for everlasting life that consumed Gilgamesh and Qin Shi Huang might come to a close sometime this century. Before that happens, however, we must discuss the implications and consequences of a world in which death is no longer certain. Emily Dickinson, abandoning the desire to live forever, muses: “That it will never come again is what makes life so sweet.” If immortality does become a reality, we will have to reassess the sweetness in life.
“We are an equal opportunity employer and do not discriminate against otherwise qualified applicants on the basis of race, color, religion, national origin, age, sex, veteran status, disability, cybernetic augmentation or lack thereof, or any other basis prohibited by federal, state or local law.”
Most of us are accustomed to seeing this equal opportunity clause when we’re filling out job applications — so much so, in fact, that our eyes tend to skim right over it. Chances are, you’ve seen it so often that you completely ignored the first paragraph. But if you go back and read it carefully, you can see what the equal opportunity clause might someday look like.
Yes, you read it right. Get ready to work alongside cyborgs at the office, the shop and the warehouse. Get ready to send your kids off to be taught and babysat by cyborgs. Get ready to engage in water cooler banter with cyborgs, collaborate with cyborgs, attend power meetings with cyborgs and carpool with cyborgs. Get ready to watch laughably sterile corporate videos at your workplace on how to prevent cyborg-discrimination and what to do if you suspect that it’s occurring.
Because inevitably the next major labor rights movement — here in the US and elsewhere around the world — will involve cyborgs in the workplace. To protect them from being denied employment as a result of their modifications, new anti-discrimination laws will need to be passed. Cybernetic implants such as what cyborg-activist Neil Harbisson wears on a regular basis are out of the ordinary, draw attention to their wearers and therefore might alarm potential employers.
Employers might worry, understandably so, that the technology will be used for purposes other than what the wearer claims it’s for, lead to workplace rivalries and disputes, create distraction or drive away clients. Let’s be honest here. Not many employers would be keen on having someone who wears as much hardware as real-life cyborg Steve Mann does work the cash register.
Steve Mann, an inventor and professor at the University of Toronto, is the perfect example of why such laws will be necessary. In July 2012, Mann was physically assaulted in a Paris McDonald’s by one of its employees presumably because the assailant didn’t appreciate his odd appearance. Cyber-hate crimes such as this will surely become more common in the workplace and elsewhere.
Baseline humans who choose to remain cyber-free, or who can’t afford the technology, will also need to be protected, for the opposite reasons. Because they lack whatever skills or enhancements cybernetic humans are granted through wearable or surgically-embedded technology, employers might hesitate to hire them for or promote them to important positions. Let’s say you manage a group of market research analysts. Who would you be more tempted to bring onto your team: a brilliant baseline Harvard graduate? Or a cyborg who has undergone a procedure that boosts his brain’s calculating power to supercomputer levels?
To establish workable, enforceable anti-cyborg-discrimination laws and policies, many questions will first need to be answered.
The most obvious question: what exactly is a cyborg? Generally speaking, a cyborg is a human who has been modified or augmented with some sort of computer, robotic or cybernetic technology. By this definition, a cyborg is not built from scratch in a manufacturing plant, factory or lab as a robot is, but is instead conceived, like any other human, through the union of an egg and a sperm cell. Androids, which are nothing more than sophisticated humanoid robots, probably will not be protected under any sort of anti-discrimination laws — at least not until they are sophisticated enough to demonstrate human-like emotions and self-awareness. Thanks to advances in artificial intelligence, robotics and the reverse-engineering of the human brain, that milestone looks more and more feasible.
Even so — assuming that we can one day manufacture an android to resemble a human in every conceivable way, to say nothing of why we would ever have the need or desire to create such a being — it’s unclear whether the law would differentiate between a cyborg and android where labor rights and discrimination in the workplace are concerned. If a corporation can gain personhood status and enjoy certain legal rights and protections, why can’t an android? Would it be cruel and unlawful to make an android work around the clock, even if it showed no signs of fatigue?
When does a human become a cyborg? Where’s the line? Are people with pacemakers, hearing aids and electro-hydraulic prosthetic limbs cyborgs?
Right now, owning and using “distracting” wearable computing such as Google Glass isn’t protected by the law, because doing so is a lifestyle choice — sort of like having excessive tattoos, which can likewise preclude enthusiasts from certain occupations (though these attitudes are quickly changing). But over the coming years, cybernetic implants and augmentations will become increasingly ubiquitous, available in all flavors and degrees of performance. The more these technologies are accepted and used by a majority of people, for a great number of everyday tasks, the less they will seem like a choice. Instead, they will be viewed as essential tools for maintaining a “normal,” productive life, the same as an automobile, computer or phone. Though it’s technically possible to go without, most of us cannot do without a phone of some kind — smartphone or otherwise — and for this reason the only real choice in the matter is which brand of phone to buy and which service provider to contract with.
And yet, in 1876, a Western Union internal memo scoffed at the idea that people would ever need such devices: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”
Or consider this 1943 comment made by Thomas Watson, then-chairman of IBM, who doubted the pervasive need for computers: “I think there is a world market for maybe five computers.”
Or this one by Digital Equipment Corp. founder Ken Olson, as recently as 1977: “There is no reason anyone would want a computer in their home.”
In 1899, the great Irish-born physicist and engineer William Thomson, Lord Kelvin — who determined the value of absolute zero, among other scientific contributions — strung together a staggering list of boneheadedly inaccurate predictions: “Radio has no future. Heavier-than-air flying machines are impossible. X-rays will prove to be a hoax.”
Wrong. Wrong. Wrong.
And so it will be with cybernetic implants and augmentations. Most people now doubt that such things could ever become mainstream, but as we’ve seen again and again, exciting new technologies tend to fill lifestyle gaps we never knew existed.
Workers in the US are protected in a number of ways. But if employers are required not to discriminate against those with a certain religious preference, which is very much a lifestyle choice, unlike age, sex and race, then perhaps cyborgs will one day have their rights addressed as well.
I believe that being a cyborg is a feeling, it’s when you feel that a cybernetic device is no longer an external element but a part of your organism. ~Neil Harbisson
We know why rain falls from the sky and how distant stars are born. We know the exact height of our planet’s tallest peak and the depth of its deepest ocean. We know that all the world’s landmasses split asunder eons ago from one super-continent and that human beings share a common ancestor with apes. We know why whooping cranes migrate, why salmon swim upstream, and why bats hang upside down. We know that the planet Mercury’s core accounts for about 42 percent of its volume and that the surface temperature of Neptune’s moon Triton plunges to as low as -234 degrees Celsius. We know how to split the atom and unleash unimaginable carnage.
Taking into account all the discoveries we’ve made over the past 2,000 years, it’s amazing that what we know least about is, well, us — specifically, the human brain or, as President Barack Obama describes it, the “three pounds of matter that sits between our ears.”
But that will soon change. (And by “soon,” we mean sometime within the next decade.) The president recently unveiled details of an ambitious new plan to map the human brain. According to the White House’s website, this $100 million undertaking might lead to much-needed benefits such as better treatments or even cures for neurological and emotional disorders, including Parkinson’s, PTSD, traumatic brain injury and bipolar disorder. Although much further in the future, the research might also lead to some sort of advanced human-computer language interface.
The official title for the project is — take a deep breath — the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Its ultimate goal is to “produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space.” Furthermore, it aims to determine how exactly “the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.”
This is exciting news indeed, as the NIH BRAIN Initiative could very well end up being Obama’s Apollo 11 moon landing or Human Genome Project — to name two similarly bold, landmark scientific and exploratory undertakings championed by earlier presidents. Basically, what we’re talking about here is reverse-engineering the human brain. By devising a map that explains how the roughly 100 billion neurons in our brains connect, behave and operate, we’ll finally begin to approach an understanding of ourselves that rivals the extent of what we know about the carbon cycle, the mating habits of the great white shark and the composition of Martian soil.
Besides practical applications, the NIH Brain Initiative will hopefully give us answers to questions, both profound and trivial, that have stumped even the greatest minds. For instance:
Why do we blush when we feel embarrassed or ashamed?
What’s the evolutionary purpose of laughing and expressing humor?
Why are yawns contagious?
Why do we dream, and why are dreams sometimes so vivid and lucid as to seem as real as “reality”?
Why did every primitive culture develop the idea of divine beings, and why do so many millions of people continue to subscribe to the cults built around them?
What is consciousness exactly, and why must it be tied to one single person at all times?
How can our brains be so goddamn complex — the best of them able to devise new poetic forms and musical genres, theorize the existence of dark matter and sketch an accurately detailed mural of the New York City skyline from memory — yet so clunky and inefficient that we often have difficulty recalling where we left our car keys or what we just read?
Practical results of this years-long study will not come overnight. Hopefully the BRAIN Initiative’s efforts will lead to new treatments and cures of neurological and neurodegenerative disorders and help us become happier, healthier beings. Besides that, who knows what else we might find buried deep in the coffers of the three pounds of matter that sits between our ears? Just as the Human Genome Project has led to advancements in molecular medicine, DNA forensics and bioarchaeology, the NIH’s research will likely have major neurological and societal implications that will change the face of humanity forever.
When we hear the word “cyborg,” we think of an emotionless being that has completely lost or was never granted its individuality or right to privacy. We think of the worst kind of collectivist entrapment, a state of perpetual mindlessness that seeks only to follow directives passed down from some higher authority. We think of the Terminator, Robocop and Star Trek’s Seven of Nine.
This negative attitude we harbor toward the idea of cyborgs has fueled a massive backlash against Google Glass, which many people feel is an assault on privacy and individuality. An advocacy group, Stop the Cyborgs, is in fact campaigning to limit the use of intrusive devices such as Google Glass with the intent to “stop a future in which privacy is impossible and central control total.” Likewise, some businesses have already banned the device from their premises. The first such establishment, the 5 Point Cafe in Seattle — which describes Google Glass as a “new fad for the fanny-pack wearing never removing your bluetooth headset wearing crowd” — has effectively aligned itself with Star Wars’s droid-hating Mos Eisley Cantina.
It should be noted that the 5 Point Cafe’s ban on Google Glass is done somewhat out of respect for its patrons’ right to privacy, somewhat to be sardonic, somewhat to rabble-rouse and attract media attention — but mostly because the thing looks, well, dumb. Its faux-futuristic, Apple Store aesthetic doesn’t fit the cafe’s Seattle counterculture, hole-in-the-wall reputation. The cafe’s slogan, after all, is “Alcoholics serving alcoholics since 1929.”
The 5 Point Cafe’s disapproval of Google Glass also says a lot about the majority of Americans’ attitudes toward what they perceive as a gradual loss of privacy and individual freedoms due to technological intrusion. Smartphones are just as guilty of this as Google Glass, but the latter’s always-visible, always-on, always-pointed-at-you functionality crosses a line that makes many people uncomfortable. We just want to be left the hell alone. The idea of being secretly filmed — by any device, for any reason — makes us squirm, even though we’re knowingly caught on surveillance cameras dozens if not hundreds of times a day. We desire privacy and respect for what makes each of us unique, and when we don’t get it, we feel less-than-human. We feel as if we’re being treated like an animal.
Or worse, we feel as if we’re being treated like a cyborg, which is essentially a tool. And since tools don’t receive empathy or privacy, neither should a cyborg.
So maybe this is why the Mos Eisley Cantina’s barkeep gets all huffy when Luke tries to enter with his recently acquired droids. Although they appear to have emotions and personalities, C-3PO and R2-D2 are really cybernetic frauds, artificial charlatans trying vainly to pass themselves off as equals to other Cantina patrons. It’s an insult. To the surly proprietor, droids’ transparent mimicry of self-awareness and entitlement to certain rights sentient beings enjoy mocks the privilege of actually being a sentient being.
This repulsion toward androids and cyborgs can be attributed to the uncanny valley effect, first described by roboticist Masahiro Mori in 1970. Simply put, when we are confronted with a robot that resembles a human but doesn’t get human behavior quite right — its eyes might not blink like ours, or its movements might appear too jerky or calculated — it creeps us out.
And so it is with Google Glass. When people wearing Google Glass become a common sight on the streets, they will surely be met with unease and skepticism by some observers.
They might ask: What are they doing with that thing? Am I being recorded or filmed? When I speak to them, are they tuning me out by listening to music, watching a movie or checking the weather forecast? Are they mentally correcting my factual errors using Wikipedia without my knowledge? Are they using face-recognition technology to scan and analyze me? Do they know all about me — my name, my Social Security number, my past, my secrets?
What part of their humanity and uniqueness did they have to give up to enjoy the benefits of Google Glass?
As admirable as Stop the Cyborgs and 5 Point Cafe’s efforts may be, there’s little hope that the cyborg-ification of humans will stop. No child wants to grow up to be a cyborg, yet humanity is increasingly becoming cybernetic. Many people cannot reasonably function without the use of hearing aids, artificial hips, mind-controlled prosthetic limbs or computerized speech generators. These devices are necessities, and no one faults their users for taking advantage of them. Google Glass is admittedly a different beast altogether, as it is an elective tool and could be used to violate non-wearers’ privacy.
But right or wrong, it’s only the beginning. From retinal implants that perform the same tasks as Google Glass and more, to telekinetic tattoos and nanobots, we’ll be so hard-wired with tech that, as futurists such as Ray Kurzweil predict, the line separating man and machine will blur.
By then, will we even care about abstract liberties such as privacy and individuality?
It’s almost impossible to fathom now, but perhaps in the future we’ll look back and wonder why we cherished our individuality so much and resisted collectivism. After all, privacy as we now know it is a relatively modern phenomenon that we take for granted. Most of us wouldn’t be able to tolerate the constant physical togetherness and lack of solitude that defined a medieval European lifestyle. But since then we’ve readjusted our attitudes toward privacy and individuality, and chances are they will need to be readjusted again. Perhaps once most of us are wired to communicate telepathically and always be aware of each other’s locations and identities, we’ll find popular twentieth- and twenty-first-century depictions of cyborgs to be quaint, naïve and, yes, even a little offensive.
Might as well get your taste buds ready now.
Short of providing every family with a Star Trek food-replicator, how else can we adequately feed a population that is likely to exceed nine billion by the year 2050?
Like it or not, bugs are coming to a dinner plate near you. And restaurant and grocery store and street food vendor.
Hundreds of cultures around the world already do as Simba must — have always done so, in fact. But here in Western society, insects and arachnids carry a certain stigma that will undoubtedly require a generation or two to eliminate or at least diminish before people even consider voluntarily placing one in their mouths. Hell, many struggle to summon the will to get close enough to a cockroach to stomp on it. Up until now, entomophagy — or bug-eating — has been associated with mental illness (in Bram Stoker’s Dracula, the delusional inmate Renfield feasts on flies and spiders), survivalists’ last resort before starving to death and sadistic American game shows.
And of course who can forget the nauseating dinner scene in Indiana Jones and the Temple of Doom?
And then this guy:
To put it mildly, convincing Europeans and Americans that consuming pests is in their best interest will be no easy task. Advocates of insect-eating might as well be urging them to eat glass. There’s just something inherently icky about these creatures that prevents most everyone from entertaining the thought of ingesting them — even to stay alive. It’s not a stretch to say that some people, if given the choice between eating only bugs or nothing, would sooner choose to die of malnourishment.
But that’s mainly because we’ve been conditioned to believe that insects and arachnids are vile, disgusting creatures that root in dog shit — as if pigs don’t do the same thing. And yet many people are of the opinion that bacon is basically meat candy.
One of the problems is that whenever entomophagy is portrayed in films or on TV, the critters are usually uncooked and sometimes still alive — segmented little bodies writhing about, spindly legs twitching, antennae groping, wings thrumming, blank soulless eyes staring. No wonder the idea disgusts people.
By contrast, prepared and cooked insects and arachnids, besides being nutritious and plentiful, showcase a cornucopia of flavors that people already enjoy. Scorpions taste like shrimp. Termites taste like carrots. Huhu grubs taste like peanut butter. Palm weevil larvae taste like meat candy, or bacon. There’s no reason to think that, were entomophagy to catch on here in the US, everyone would be slurping down live wriggling caterpillars like Simba.
(Incidentally, caterpillars provide more iron and protein than an equivalent serving of minced meat. That’s pretty good, considering that they subsist on leaves and flowers, which humans don’t eat. Livestock in the US alone, on the other hand, annually gorge on enough grains to feed an estimated 840 million people, more than the entire population of Europe.)
Outside of the vegetarian/vegan crowd, there’s nothing culturally repugnant about eating chicken tenders, which don’t resemble the feathered animal at all. But would you eat a raw chicken? A live chicken? If this were the most widely depicted way to eat poultry, as it is with bugs, chicken probably wouldn’t appear on too many menus, and the owners of Chick-fil-A would have to find some other means to finance their bigotry against the LGBT community.
Raw fish, by the way, is eaten by thousands every day without their being repulsed by it — though many are and refuse to touch the stuff. Like bugs, sushi was until recently a weird taboo foreign “food” in Europe and America, not achieving mainstream status until at least the late 1980s. In the 1950s, in fact, the US Embassy in Japan advised visiting Americans to avoid eating uncooked fish — not because of any evidence that sushi was harmful necessarily but because the idea of raw fish was, to Westerners, culturally revolting and barbaric. Sushi was the foodstuff of sick, uneducated and desperate peasants who knew no better than to shovel unsanitary fish-flesh down their gullets.
How attitudes have changed. Sixty years later, no upscale suburb in the US would be complete without at least half a dozen sushi joints.
Half a century from now, will “insectarias” populate our commercial centers? After getting your nails done or hair trimmed, will you think nothing of popping into the nearby “bug bar” for a scrumptious bag of cricket-kabobs and mantis-snaps?
Before you answer “hell no,” consider that you’ve already eaten literally hundreds of bugs today without realizing it — maggots, ants, aphids, mites, fruit flies. Whether your food came canned, frozen, bagged, processed or directly off the vine, you most certainly have ground up untold insect particles between your molars and swallowed them down.
Perhaps it’s time we start acknowledging insects and arachnids for what they are: an abundant, renewable, inexpensive, nutritious and — when cooked — delicious food source.
If you grew up in the 80s, you might remember a TV show called Tales from the Darkside. It was little more than a poor man’s Twilight Zone, but occasionally an episode aired that surprised and shocked you.
One such episode was titled “Mookie and Pookie.” I know — terrible names, but it gets better. The episode features two teenage twins, one of whom, Mookie, is dying from a terminal illness. In his few remaining days, he frantically works to complete the instructions for a sophisticated computer program that he makes Pookie promise she will carry out after his death. Once he dies, Pookie keeps her promise and obsessively follows her deceased brother’s instructions, buying exotic computer parts, assembling them, writing code. She does this despite not having a clue what might be the result and despite her parents’ insistence that she’s wasting her time and money on a project conceived out of desperation. So the day arrives when she finishes the final step, and after she eagerly boots up the mystery machine, she hears the voice of — presto! — her late brother Mookie. He has risen from the dead! Sort of. Buried somewhere in the ones and zeros and computer circuitry is his consciousness, as present and aware as any healthy teenager — sans physical body.
The episode ends not with newly digitized Mookie taking over the world’s electric and information infrastructure, but on a warm note, with the entire family, computer-boy included, playing a round of Scrabble.
As hokey as Mookie and Pookie’s story is, cybernetic immortality might very well become a reality. Dmitry Itskov, a Russian businessman and founder of Initiative 2045, is currently seeking investors to fund research that will lead to eternal life — with a catch. The catch, of course, is that your body does not persist indefinitely; instead, your consciousness — what makes you you — lives on in a cybernetic Matrix-like environment.
But what’s a body other than a sack of meat to encase one’s consciousness?
That’s the official stance, at least, of Initiative 2045, whose main scientific goal is to “create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.”
To repeat: a “more advanced non-biological carrier.” The explicit assumption is that what millions of years of biological evolution have granted us is vastly, unfathomably inferior to what a few short decades of computer research can achieve. Which is an amazing testament to human intelligence and ingenuity.
Initiative 2045 sees immortality as entirely plausible, a scientific problem that requires a gradual series of intermediary “trans-humanistic transformations,” starting with the replacement of body parts — limbs as well as organs — with non-biological, cybernetic components… and ultimately ending with the replacement of our meat sacks with ones and zeros.
This step-by-step process is analogous to futurist Ray Kurzweil’s concept of the “bridge to bridge” path to immortality, which is why he allegedly takes between 180 and 210 vitamin and mineral supplements a day: to sustain his carbon-based body long enough to see the day when he no longer needs his carbon-based body. Such a radical change in human existence — when shuffling off our mortal coils results not in our deaths but our cybernetic rebirths — unquestionably qualifies as a Singularity event.
As exciting as this all sounds, what remains to be answered by Dmitry Itskov and others is the existential nature of a life lived in cyberspace. What will people “do” with their time — infinite time for that matter? Will we fall in love, have families, go to work, play Scrabble? Will it be necessary to emulate a “normal” life, complete with the laws of physics and the need to eat and sleep? All we “know” is what we’ve seen in sci-fi classics such as William Gibson’s groundbreaking cyberpunk novel Neuromancer and the films Tron and The Matrix. But of course sci-fi tends to exaggerate the implications of speculative technology. Maybe cyberspace will end up as ho-hum as normal space often is.
Or maybe we’re already living in a computer simulation, as many have earnestly theorized. How would we know? After all, what we think of as “reality” is nothing more than a sophisticated construct our minds have created based on sensory data. Colors, sounds, flavors, pain, euphoria — these are all interpretations of the world beyond our senses. At the rate computer science is accelerating, it’s perfectly plausible to imagine an advanced human culture with the capability and means to replicate the experience of, well, life.
Consequently, if we are indeed living in a future culture’s simulation and, while in that simulation, devise a way to upload our consciousnesses into a separate cyberspace, there’s no end to the levels of Inception-like simulations we could simultaneously be experiencing.
Let’s just hope that at least one of them is more interesting than an afternoon playing Scrabble with our folks.
Man is born free, and everywhere he is in chains. ~Jean-Jacques Rousseau (The Social Contract)
* * *
From the first day to this, sheer greed was the driving spirit of civilization. ~Friedrich Engels
* * *
Look, a guy who builds a nice chair doesn’t owe money to everyone who ever has built a chair, okay? ~Mark Zuckerberg (played by Jesse Eisenberg) in The Social Network (2010)
* * *
You see, money doesn’t exist in the twenty-fourth century. The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves and the rest of humanity. ~Capt. Jean-Luc Picard, explaining to a twenty-first-century woman how the “economics of the future” differ from hers
It’s often been said that humans are the only species who pay to live on Earth. Before we emerge from our mothers’ wombs — indeed, even while we patiently gestate inside our mothers’ uteri — we have already assured our parents an often insurmountable heap of expenses and debt for which they are responsible: food and clothes and toys and books and medical attention and hobbies and extracurricular activities and cars and higher education. By the time she turns 17, a child born in 2011 will have cost an average middle-class family $234,900. For no other reason than she came flailing and screaming into the world.
Will our penchant for commodifying every last scrap of our existence still remain strong in the year 2099? For how many more decades will humanity tolerate being enslaved by an imaginary, man-made monetary system that favors the very few, just as feudalism did centuries ago?
European feudalism, of course, lasted only 700 or 800 years before gradually giving way to what we now call capitalism — a term popularized by socialist Karl Marx, of all people. And like feudalism, modern capitalism has its roots in human bondage. Its rise to prevailing social system in the Western world would have been far more difficult had it not been for the lucrative human trafficking business. In fact, large American banking corporations such as the Warren Buffett-run Berkshire Hathaway, the now-defunct Lehman Brothers, JPMorgan Chase and Wachovia all came to prominence as a result of their involvement, one way or another, in the African slave trade.
Nearly 150 years after the abolition of slavery, the business model of commodifying human life still persists. We often talk about how much this celebrity or that politician is worth, as if monetary wealth is a person’s ultimate defining characteristic. We put a price on basic human needs such as food, shelter and health care — a price that’s too frequently beyond the means of many families. Around the world, human trafficking remains a thriving industry. We’ve even gone so far as to grant legal personhood to corporations.
Under the capitalistic model, people are commodities, and commodities are people.
But like feudalism, capitalism will one day buckle under the weight of its many inherent shortcomings. A system that sets arbitrary values on goods, services and people is doomed to fail.
The question is: What will take its place? If capital and the drive to acquire wealth no longer exist in Capt. Picard’s twenty-fourth century, how are goods and services exchanged? What motivates people to go to work and be productive members of society when imaginary Monopoly money is no longer the reward?
As always, the Future Culturalist is mum on details regarding the nature of the economy in the year 2099.
Linux and Wikipedia
For solutions on how to move past a capitalistic social system, we might look at the thriving world of open source software. Millions upon millions of people use the free operating system Linux exclusively, as an alternative to the pricey and, many would argue, inferior Microsoft Windows. People don’t necessarily use Linux because it’s free; they use it because the source code is open to the public, allowing for greater creativity, innovation and collaboration than you can find on the corporate-owned Windows.
Another good example is Wikipedia, the free online encyclopedia. There are over 4,100,000 articles in the English-language version alone, all of them maintained and contributed to by “ordinary” users. In the past, Wikipedia has been criticized for allowing baseless or false information to appear on its site, but vigilant contributors tend to correct the work of Wikipedia “vandals” pretty quickly.
That collaborators will never receive any monetary compensation or royalties doesn’t stop them from modifying and improving Linux and Wikipedia. They work, as Capt. Picard says, to better themselves as well as humanity. It’s the take-a-penny-leave-a-penny model of innovation that attracts such people.
Some work to improve open source software. Some simply use it. Those who try to abuse it — and many do — are outed and ultimately barred from participating. Everyone benefits.
Andre Charland, an Internet software developer, had this to say after open-sourcing one of his company’s products: “You can’t do it soon enough. You’ll be blown away by how much better your code gets and how much more quickly you can reach a broader audience.”
Case in point: those who use Linux know how much more efficiently it runs than Windows. And thanks to Wikipedia’s success, when was the last time you used a paper-bound encyclopedia?
How then can we emulate this model in our general social system? What would an open source economy look like? How would it function?
In Star Trek, people of the future routinely enjoy the convenience of replicators, which are machines that can synthesize pretty much anything you want them to by rearranging subatomic particles into food, water, clothes, toys, spare parts and much more.
How about paper money? A pile of gold Krugerrands? A $1 billion check issued by the IRS?
This is forgery, of course, but no doubt anyone with a replicator would use it as a personal ATM. After all, entertainment and software companies have tried cracking down on piracy and illegal file-sharing, but they still lose billions annually, with the amount likely to rise. The problem will only increase with the availability of 3D printers, which are simply precursors to replicators.
It’s apparent that, were everyday folk permitted to own powerful Star Trek-caliber replicators, it would spell the end of the economy as we know it — or at least of physical currency of any kind. Perhaps this is why the acquisition of wealth is no longer important in Picard’s time. With wealth available to all as plentifully as oxygen, it loses its uniqueness and desirability (and “wealth” here means anything of value, not just currency). Consequently, there would be no reason to work in exchange for wealth.
Think of all the companies and businesses that would instantaneously be rendered obsolete. If you scroll through the Fortune 500 for 2012 — a list of the U.S.’s most profitable companies — you’ll find it populated by corporations whose goods and services many of us can’t easily do without: oil, food, banking, communications, automobiles, retailing. In the top ten alone, we find four oil companies, two automakers, a tech firm, a massive holding company, a mortgage lender and the world’s largest retailer. The #1 company, ExxonMobil, is worth close to half a trillion. The combined revenue of these ten companies amounts to over $2 trillion — greater than the GDP of many small countries and about an eighth of the U.S.’s debt.
What role would these corporations serve in a world in which replicators were as ubiquitous as cell phones?
Because they hold patents on their intellectual property, they would undoubtedly charge you a service fee for replicating their products. Interested in putting genuine Exxon fuel in your car (assuming we still use internal combustion engines)? Service fee. Hungry for a McDonald’s Big Mac? Swipe your card here. Got your eye on the latest Apple iDevice? Pay up.
Already, 3D printers are raising sticky copyright issues. But as more and more people own 3D scanners and printers, the problem will become too great for corporations to manage, as we are currently seeing in the entertainment industry. Even if the devices come with preventive measures, hacking will become widespread.
It’s perfectly conceivable, in fact, that once 3D printers are as powerful as the replicators featured in Star Trek, intellectual property will no longer be relevant. Everything will be open source: readily available to, and indeed modifiable by, anyone with access to such technology. Because if acquiring wealth is no longer important, what would motivate an individual or corporation to legally protect intellectual property?
Planet Open Source
The most fervent capitalists will inevitably balk at the idea of an entirely open source world. This is just techno-socialism, they might say. If no one owns his or her ideas, competition would cease and innovation would die. Plus, if no one works to acquire wealth and make a living, idleness and crime will prevail. While these are valid arguments, there are a couple of strong counterarguments.
For the majority of human history, we’ve done without copyrighting, trademarking, patenting and other ways of protecting intellectual property. And yet somehow we mustered the drive and curiosity and ingenuity that’s required to make great strides in every area of human knowledge: science, art, literature, music, metallurgy, woodworking, astronomy, agriculture, fashion. Good thing, too: imagine if Gurg had been allowed to take out a patent on his invention, the wheel. It’s absurd to think about.
Secondly, the availability of replicators would not lead to idleness and crime. In fact, it would have the exact opposite effect on society. Crimes of passion such as rape and murder might still exist, but with everyone’s basic needs met and poverty and desperation eliminated, there would be little reason to steal. You could make the case that crimes would still be committed by those with mental disorders, but replicators would give the afflicted free access to the best medications, so long as they were responsible enough to take them regularly or had the help of a family member or social worker.
And as for idleness: free of the stress and inconvenience of having to work for a business or company that means little to you other than a way to pay the bills, people might then have the time and energy to pursue other goals in life. They could “work to better [themselves] and the rest of humanity,” instead of a corporation’s bottom line. Rather than greed and cutthroat competition, the driving forces in society would be self-improvement and collaboration.
Like all the futuristic technology featured in Star Trek — or any sci-fi, for that matter — replicators seem too distant a notion to ever become a reality. But a reality it will one day become: we are already witnessing its inception with the 3D printer. Coupled with the power of open sourcing, universal replication will help usher in a new kind of economy, one that doesn’t favor the few and necessitate the arbitrary commodification of goods, services and human life.
Any sufficiently advanced technology is indistinguishable from magic. ~Arthur C. Clarke
Let’s just say God works too slow. ~Magneto (Ian McKellen), in 2000’s X-Men
Once the stuff of folk religion, telepathy and telekinesis have in modern times been featured in the pages of comic books as well as on-screen in sci-fi and fantasy films, TV shows and video games.
Now, however, it seems likely that we’ll soon — i.e., in the next 30 years or so — be able to achieve what Stan Lee and other writers cooked up in X-Men, as demonstrated by superhuman mutant characters such as the telekinetic Jean Grey and the telepathic Charles Xavier.
Todd Coleman, an electrical engineer at the University of California, San Diego, is currently working on a temporary tattoo that would allow the wearer to operate machines with his or her thoughts or speak telepathically with others, presumably so long as they’re also tatted up with Coleman’s tiny device. Using bendable electrode arrays that easily attach to the skin, these so-small-they’re-almost-invisible tattoos pick up mental signals and convert them into commands.
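Stripped of the hardware, the signal-to-command idea behind such a device is conceptually simple: clean up a noisy voltage trace, then map what remains to an action. Here is a deliberately toy Python sketch; the smoothing window, thresholds and command names are all invented for illustration and have nothing to do with Coleman's actual research:

```python
# Toy brain-computer-interface pipeline: smooth a noisy signal with a
# moving average, then map its mean amplitude to a hypothetical command.
# All numbers and command names are invented for illustration.

def smooth(samples, window=3):
    """Damp sensor noise with a simple trailing moving average."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def classify(samples):
    """Map mean smoothed amplitude to a (hypothetical) command."""
    smoothed = smooth(samples)
    mean = sum(smoothed) / len(smoothed)
    if mean > 0.7:
        return "MOVE"
    if mean > 0.3:
        return "SELECT"
    return "IDLE"

print(classify([0.8, 0.9, 0.85, 0.95]))  # strong, sustained signal
print(classify([0.1, 0.05, 0.2, 0.1]))   # weak background noise
```

Real electrode arrays face far harder problems (artifact rejection, per-user calibration, distinguishing intent from idle brain activity), but every thought-controlled prosthetic reduces, at bottom, to some version of this signal-in, command-out loop.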
Coleman and his research team hope to market the technology for use in the surgery room as well as the virtual cockpit, meaning drone pilots will one day be able to level Pakistani villages from the comfort of their living rooms. No word yet on whether we’ll be able to remotely control nine-foot blue aliens.
We already routinely practice magical thinking, of course, though not at the scale promised by Coleman’s tattoo tech. Whenever you willfully raise your arm or take a deep breath, you’re moving an object merely by thinking about it, which is the definition of telekinesis. Your thoughts send electrical signals from your brain to your arms, commanding them to rise, or to your lungs, commanding them to expand and receive air. What prevents you from scooting the chair away from the table without touching it is a simple lack of wiring — physical or otherwise — connecting your mind to said chair.
A lack of wiring is also what prevents those stricken with ALS and other neurodegenerative diseases from retaining control of their bodily movements. Help is allegedly on the way this year, though, in the form of thought-controlled robotic limbs, which will finally give paraplegics, quadriplegics and amputees the much-needed ability to perform simple everyday tasks. Coleman’s research could only increase the likelihood that no one unfortunate enough to develop ALS will be confined to a bed for the rest of his or her life. This application seems much more beneficial to humanity than facilitating drone strikes.
* * *
The difference between the right word and the almost-right word is the difference between lightning and the lightning-bug. ~Mark Twain
Whereas telekinesis has the potential to aid amputees and those with neurodegenerative illnesses as well as assist surgeons and pilots, telepathy has the potential to restructure the dynamics of human relationships at every level — parent-child, wife-husband, teacher-student, employer-employee, doctor-patient, ambassador-ambassador.
In fact, telepathy just might be the cure to war, famine and many other afflictions, human-made or otherwise.
For one thing, telepathy seems to be a more effective and efficient means to communicate with other people than verbal or even written language. Either the essence of what we mean is lost in the words, which, unless you’re a world-class poet, are invariably inadequate, or we can’t find the right words (and arrangement of words) to satisfactorily convey our message. In many cases, the wrong words can even confuse what we originally meant. Our thoughts, on the other hand, are pure, unfiltered and contain a much richer vocabulary — not just arbitrary, man-made words but also images, emotions, memories and other abstract, albeit meaningful, bits of data that we can never quite seem to translate into human language.
Consider the implications, both good and bad, of having the ability to speak telepathically with others.
With telepathic abilities, world leaders, ambassadors and Congresspeople could discuss important matters with no filters or blinding rhetoric and possibly find common ground. Assuming the beggar on the street corner has telepathic abilities, he could convey his hunger and desperation to passers-by much more effectively than any cardboard sign. A visit to the shrink would certainly be much more helpful than it is now. Having access to memories, emotions and anything else we often have trouble verbalizing would help the psychologist or psychiatrist determine and treat what’s giving us pain.
Telepathy as it’s described here sounds not unlike the Point of View Gun featured in the 2005 film The Hitchhiker’s Guide to the Galaxy. Incidentally, the gun doesn’t appear in the novel of the same name.
What we fear the most about the idea of telepathy is that we would constantly have “people in our heads,” never giving us a moment’s peace. Or conversely, we’d always be privy to other people’s secrets we’d prefer not to know. Let’s assume for a moment that, were we to have the ability to speak telepathically, it could be shut off like a radio, giving us some level of control. With Facebook, after all, we can pick and choose who gets to see our status updates and whom we want to receive status updates from. Telepathic implants or tattoos might work similarly.
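The Facebook analogy suggests how the off switch might work: an allow list per receiver, with anything from an unapproved sender silently dropped. This toy Python class, with every name and behavior invented purely for illustration, sketches the idea:

```python
# Toy sketch of per-sender telepathy filtering, modeled on social-media
# privacy settings. All names and behavior are invented for illustration.

class TelepathicReceiver:
    def __init__(self):
        self.allowed = set()   # senders whose thoughts we accept
        self.inbox = []        # (sender, thought) pairs that got through

    def allow(self, sender):
        self.allowed.add(sender)

    def block(self, sender):
        self.allowed.discard(sender)

    def receive(self, sender, thought):
        """Deliver a thought only if the sender is on the allow list."""
        if sender in self.allowed:
            self.inbox.append((sender, thought))
            return True
        return False  # silently dropped, like a filtered status update

me = TelepathicReceiver()
me.allow("alice")
me.receive("alice", "lunch?")    # delivered
me.receive("stranger", "hi!")    # dropped without a trace
print(me.inbox)
```

The design choice that matters is the default: here, unknown senders are blocked unless explicitly allowed, the opposite of how most social networks ship. With minds rather than status updates at stake, deny-by-default seems like the only sane setting.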
But then, blocking others from gaining entry into our minds would inevitably arouse suspicion.
The X-Men mutants designate themselves as a separate species to differentiate their kind from baseline humans: Homo superior. Able to bend the laws of physics in all imaginable ways, they’ve naturally crossed their own Singularity threshold. But as Arthur C. Clarke’s quote suggests, advanced technology such as we’re likely to witness in the twenty-first century — thanks to the research of Coleman and others — will give us abilities that can only be described as magical.