The One Thing “Back to the Future Part II” Got Right About 2015

When Back to the Future Part II was released in 1989, the year 2015 was a lifetime away. I saw the movie with a group of friends for my twelfth birthday party, and we were floored by the cool technology depicted in the film’s future scenes. Flying cars! Hoverboards! Power-lacing Nikes! We all agreed: it would be a long wait indeed until 2015.

Here we are, midway through 2013, and so much of what Back to the Future Part II promised has yet to be realized. Like all twelve-year-olds who saw the movie, we were absolutely convinced that, of all the futuristic contraptions and devices, the hoverboard was the most likely to appear on store shelves. In fact, director Robert Zemeckis and his production staff successfully duped many moviegoers into believing that they had invented a real hoverboard specifically for the film.

But alas, no hoverboards. No flying cars. No power-lacing Nikes or self-drying jackets. No holographic movie trailers, food hydrators or retractable fruit buffets. Mercifully so, the movie’s prediction of early-21st-century fashion has remained fictional: no double-ties, no wearing our jeans pockets inside-out, no metallic sunglasses (or whatever the hell those things are that Doc Brown wears).

So what did the movie get right, if anything?

Watch the following clip to find out:

That’s right, Marty’s children are using devices that resemble Google Glass. So in the world of Back to the Future, Google Glass is apparently invented before the Internet. That’s not quite as dramatic as inventing the aerosol deodorant before the wheel, as the 50-armed Jatravartids do in The Hitchhiker’s Guide to the Galaxy, but, from today’s vantage point, it’s unbelievable nonetheless.

Had Marty, Doc and Jennifer visited the real 2015 — just two years from now — how would their reactions be different? Marty calls the power-lacing Nikes “far out.” What would he have thought about iPhones, Xboxes and GPSes?

Losing Our Religion: Will Atheism Be the Norm by 2099?

When Friedrich Nietzsche proclaimed that “God is dead” in 1882, it caused a major uproar in his native Germany and the rest of Europe. Although not strictly an atheistic statement — more likely he was observing how far our industrialized culture had carried us from the biblical, Abrahamic concept of God — his heretical words would doubtless have resulted in his execution had he uttered them a few hundred years earlier.

And now, in 2013, a woman can admit on live television, with dignity and total impunity, that she’s an atheist. In fact, her interviewer — in this case, CNN anchor Wolf Blitzer — chuckles at her admission (and perhaps at his own embarrassment for pressing her to “thank the Lord”) and even warmly touches her shoulder.

By now everyone has seen this short exchange between Blitzer and 30-year-old Oklahoman Rebecca Vitsmun, who, along with her husband and 18-month-old son, survived the recent tornado that, as of this writing, has killed 24 people. For such a seemingly unimportant news clip, it has stirred a national dialogue about the state of religion in America and where it is headed, and it has handed figurative ammo to loud voices on both sides of the culture war:

Atheists: “See? Good things happen even to those who worship no god.”

Believers: “True, but this event will finally compel her to have faith in miracles and guardian angels.”

A: “Miracles, shmiracles. The tornado was a natural event without purpose, and her survival was based purely on hasty decision-making and dumb luck.”

B: “So she survived, but for her blasphemous denial of all that is holy, she will spend eternity in hell.”

A: [eating a sandwich] “She seems like a pretty cool mom.”

B: [turning red in the face] “God set this tornado upon Oklahoma to teach the state’s lone atheist a lesson… and ended up inadvertently taking the lives of her neighbors… while sparing hers and her family’s…”

A: “I guess a burning bush would’ve been too subtle.”

And so on.

Thirteen million. That’s how many atheists and agnostics are estimated to be living now in the U.S. alone. Although still a small fraction of the nation’s population — just under 3%, by some counts — this number climbs with every passing year. Why? Is it because of our lack of morals? Our waning interest in tradition?

Possibly. But the main culprit, hands down, is science.

Over the course of the last four hundred years — since at least Galileo and the dawn of modern science — the work of biologists and physicists has shown us, empirically, that ours is a universe that requires no author or purpose. Charles Darwin and later evolutionists such as Stephen Jay Gould demonstrated that we and other species are here not because of some “grand watchmaker” but because of natural selection and other complex, observable processes. British physicist Stephen Hawking’s book The Grand Design (2010) and American physicist Lawrence Krauss’s book A Universe from Nothing (2012) both convincingly argue that a god is not required to create a universe or the life that inhabits it. Contemporary research into quantum mechanics suggests that our existence rests on the chaotic behavior of subatomic particles, some of which can inexplicably exist in more than one spot at any given moment. Albert Einstein, in fact, rejected the implications of the randomness and unpredictability that are inherent and essential to quantum theory, saying “God does not play dice.”

"No shit, Einstein. God doesn't play dice because God. Is. Dead."

“No shit, Einstein. Kinda hard to play dice when you’re DEAD.”

As Hawking himself has explained, science has not disproved nor will it ever disprove the existence of God. The role of science is to measure and theorize about that which is observable or at least testable, and God is neither.

However — and this is a biggie — science shows us more and more that the idea of God is irrelevant. The creation of the universe, the origin of species, the “miraculous” events in our lives such as surviving a tornado — whether you believe in God or not, these things occur every day without help from a divine being. (If you think ours is the only universe, or that universes can’t spring up randomly, read here.)

“I don’t blame anyone for thanking the Lord,” Rebecca Vitsmun said as she hugged her child closely. What a classy, non-douchey attitude toward religiosity. Perhaps in 2099, when atheists far outnumber believers in the First World, we’ll demonstrate the same sort of anthropological respect toward the biblical concept of an all-powerful, all-knowing god.

After all, this is how we feel about Zeus and Odin and Ra, once-powerful but now “dead” deities. It’s simply evolution: the Abrahamic god is next in line to follow.

Breathe Easy: New Scientific Breakthrough Can Save You from Running out of Oxygen

As children we all played the game in the pool to see who could hold their breath the longest. Think back. How long could you hold it before the burning sensation compelled you to surface and gulp lungfuls of air? Thirty seconds? A minute? A minute and a half?

As impressive as those times are, you soon might be able to hold your breath for up to 30 minutes without any adverse effects.

Amazingly, scientists at the Boston Children’s Hospital have devised a microparticle that, when injected into the bloodstream, can super-oxygenate a person’s blood and allow them to live for up to 30 minutes without having to take a single breath.

Currently the record-holder for breath-holding is German freediver Tom Sietas, who in June 2012 remained underwater for a staggering 22 minutes and 22 seconds. With the new technology, you could go without breathing for nearly eight minutes longer than that.
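For the curious, the arithmetic behind that comparison is a simple subtraction (taking the Boston researchers’ 30-minute figure at face value):

\[
30\ \text{min} - 22\ \text{min}\ 22\ \text{s} = 7\ \text{min}\ 38\ \text{s}
\]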

Such a scientific and medical breakthrough has countless applications, the most obvious of which is to save lives in emergency rooms and hospitals. Every household’s medicine cabinet might one day store a microparticle dispenser of some kind next to the Tylenol and cough medicine, ready to be administered to a family member who gets a chicken bone lodged in his throat. Long-distance runners might use the technology to ensure that their blood receives enough oxygen. As a precaution, parents might give their children a shot of super-oxygen before dropping them off at the community pool or taking them out on the fishing boat. People who enjoy erotic asphyxiation — the act of deliberately restricting airflow for sexual pleasure — could keep a treatment on hand in case things go too far. (Of course, someone else would need to be present to administer the dose, since they would be unconscious.)

Imagine the military applications. Soldiers who never tire? Navy SEALs who need not surface for air until the most opportune time?

The DC character Aquaman, who can speak telepathically with sea creatures as well as breathe underwater, often gets mocked by readers and geeks for having the least useful and least desirable superpowers of any of his fellow Justice League members.

As funny as the Family Guy clip is, no one would scoff at a person’s amazing ability to hold his breath for half an hour, thereby making him King of the Pool.

But like any new cutting-edge technology, it might take some time before these so-called microparticles are available for general consumption. So, you know, don’t hold your breath.

Imminent Immortality: Do You Really Want to Live Forever?

For as long as humans have wandered the earth, our mortality has been front and center in our long list of woes. In every culture, in every age, many people have attempted to cheat death, one of the most famous being Qin Shi Huang, king of the Chinese state of Qin in the third century BCE. Obsessed with living forever, he ordered his alchemists and physicians to concoct an elixir of life. They obliged and presented him with what they believed might grant him eternal life. Unfortunately for Qin Shi Huang, what they gave him was a handful of mercury pills, and he died upon consuming them.

Maybe they were just tired of looking at his douchey headwear and debilitatingly huge shoes.

We’ve come a long way since Qin’s day, so much so that immortality — or at least unprecedented longevity — appears increasingly plausible sometime this century. Inventor and futurist Ray Kurzweil seems so sure of it that he reportedly takes upwards of 200 dietary supplements a day to forge a “bridge to a bridge,” surviving long enough to reach the era when radically long life is the norm. The May 2013 issue of National Geographic, in fact, features this very topic.

For now, however, they say we die twice: once when we take our last breath, and again when our name is uttered for the last time.

Our greatest literature, both ancient and modern, seems to confirm this attitude. Countless examples suggest that as much as we strive to achieve everlasting life, death is our inescapable fate. To seek a loophole is folly and smacks of the worst kind of hubris. The earliest such tale, some four thousand years old, relates the ancient Mesopotamian king Gilgamesh’s quest for everlasting life following the death of his friend Enkidu. Although Gilgamesh ultimately fails in his undertaking, he achieves a sort of immortality in the minds of his people as a result of his heroic exploits. The same hubris is seen in the Greek demigod Achilles, who was said to be impervious to harm in every part of his body except his heel, which his mother Thetis failed to immerse in the river Styx. Near the end of the Trojan War, he is slain by the lethal accuracy of Paris’s arrow, but Achilles’s courageous feats guarantee that his name lives on in perpetuity.

One thing he wasn’t known for was his modesty.

For those of us who lack the godlike strength and derring-do of Gilgamesh, Achilles, Heracles and other ancient and Classical heroes, the only hope we have of gaining immortality lies in emerging age-reversing technology and research into the human brain. Our two leading options appear to be an indefinite halt to the aging process or a sort of digital resurrection — uploading our minds into vast computer servers. But is either of these options desirable?

The former option, the perpetuation of our corporeal bodies, seems at this point to be more scientifically plausible but far less satisfactory. Many stories warn of the dangers of unnaturally extending the shelf-life of our flesh and bones. The legend of the Wandering Jew, for instance, convinces us that everlasting life is a curse, a waking nightmare that results only in unfathomable despair and desperation. According to the legend, the old man scours the world seeking someone who will take his cursed immortality in exchange for their own mortality. For nearly two centuries now, Mary Shelley’s gothic novel Frankenstein; or, the Modern Prometheus has terrified readers with the personal, societal and religious implications of reanimating dead tissue. Alphaville’s 1980s anthem of youth “Forever Young” rejects the notion of immortality for its own sake:

It’s so hard to get old without a cause
I don’t want to perish like a fading horse
Youth’s like diamonds in the sun
And diamonds are forever

Forever young, I want to be forever young
Do you really want to live forever, forever and ever?

What’s the use of everlasting life, Alphaville argues, if we can’t maintain a youthful spirit? Better to die with a hopeful eye on the future than to trudge meaninglessly through eternity.

Immortality without fabulous hair, eye shadow and colorful jumpsuits? No deal!

Poets routinely insist that the only fulfilling way for us to achieve immortality is through our art and innovations. In Shakespeare’s “Sonnet 18,” the speaker promises a youth or possible lover that “thy eternal summer shall not fade, / … Nor shall death brag thou wander’st in his shade.” Because he has composed the sonnet in that person’s honor, their memory will last for as long as the poem exists: “So long as men can breathe, or eyes can see, / So long lives this, and this gives life to thee.”

Of course, there are just as many counterarguments to the idea that art leads to eternal life. Romantic poet Percy Shelley’s poem “Ozymandias” tells of a wanderer who comes across a “lifeless,” eroded statue in the desert, whose pedestal reads:

My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!

Despite the statue’s former grandeur, “Nothing beside remains. Round the decay / Of that colossal Wreck, boundless and bare / The lone and level sands stretch far away.” Even this mysterious king’s exploits and fame — whatever they might have been — couldn’t save his memory from the ravages of time. Not only has he died the first time but, as evidenced by the wasteland of his forgotten realm, the second time as well. American filmmaker Woody Allen echoes this sentiment: “I don’t want to achieve immortality through my work. I want to achieve it through not dying.”

But the question remains — is not dying desirable?

If most of us one day have the opportunity to extend our lives indefinitely, how will that change the dynamics of society and culture? A typical person living to 80 years of age goes through several dramatic changes in his lifetime: his opinions and attitudes change, as do his interests, his friends, his career, sometimes even how he remembers the past. Imagine how much change would take place over a thousand years of life! You would hardly be even a shadow of the person you once were. Some workers put in 30 or 40 years’ worth of service at a single company or organization, or work in a single industry for as many years, but how dull it would be to continue beyond that. We celebrate when couples reach fifty years of marriage, but could any of them reach 100 years? Two hundred? A thousand? Roughly half of marriages already end in divorce. Would couples, knowing that they are going to live for hundreds of years, wed with the firm understanding that they will eventually split? How would immortality affect patriotism?

Let’s pretend for a moment that the Wandering Jew really exists. For close to two thousand years, he has shuffled down countless roads, cane in hand, trying to find some fool to take his place. He clearly cannot be the same person now as he was during the time of the Romans. He’s seen far too much and met far too many people to hold on to whatever prejudices he once had. What “science” he might have believed as a young man has since been obliterated. The language he spoke for centuries, Aramaic, will soon die out. His ancient brand of Judaism no longer exists. He claims no country as his own. Having lived to be two thousand years old, he has seen the rise and fall of dozens of nations and empires. He has come to realize the arbitrariness and fragility of borders as well as of tribal and national pride.

Leaving aside the unpleasantness of experiencing eternity as a decrepit old man and being charged with the impossible task of giving away your decrepitude, what is it about immortality that attracts people so? As Caesar declares in Shakespeare’s Julius Caesar:

Of all the wonders that I yet have heard,
It seems to me most strange that men should fear,
Seeing that death, a necessary end,
Will come when it will come.

Digital Rapture

The second route to immortality involves uploading our minds onto computer servers, a solution advocated by thinkers such as Kurzweil and Dmitry Itskov. Doing so would immediately eliminate many of the problems outlined above. You need not age in a digital landscape, for one thing. And since your whole existence amounts to lines of computer code, you could conceivably “program” yourself to avoid feeling depression, sadness, doubt and other negative emotions.

But there are other problems in this scenario.

If we upload our minds onto computers, we can “live” for as long as we wish, or at least for as long as the data remains properly archived and resistant to fragmentation, viruses and hacking. After all, the official Space Jam website hasn’t aged a day since it launched back in 1996. But even if every last facet of our memories, temperament, interests, dislikes and habits carries over into the merry old land of ones and zeros, are the digital copies really “us” — the essential us — or simply clever simulations? What’s lost, if anything, in the transfer from a carbon-based world to a silicon one? Perhaps the earliest available opportunities to experience immortality will be faulty and disastrous, resulting in regrettably botched versions of our psyches.

Something’s not… quite… right.

Let’s say you upload your mind today. Now there are two “yous,” the analog you and the digital you. After your analog self dies, your digital self “lives” on. It will no doubt continue to assert that it is just as “real” as you ever were because it has the same memories, the same personality, the same tics and religious beliefs and tastes in women (or men, or both). Otherwise, how can it claim to be you? One of the problems here, if indeed there is one, is that you — the meat sack version — won’t survive to enjoy the immortality you’ve passed on to this immaterial copy of yourself.

Is “good enough” simply not good enough?

We place such a high premium on authenticity. Even if the digital copy of yourself is identical in every possible way, it’s still not the “you” that emerged from your mother’s womb. The same argument can be made with regard to art forgeries, some of the best of which are sold at auction as the real deal. Shaun Greenhalgh, possibly history’s most successful art forger, was so good that he managed to dupe both casual and expert art enthusiasts for years and make close to a million pounds before being caught. Anyone who has one of his remarkably convincing pieces sitting in their house — one of his Rodin knockoffs, for instance — could reasonably tell visitors that they do indeed have a Rodin. There’s nothing about the piece that gives away the deception, other than the abstract notion of its inauthentic origin. But for most people, that notion alone is enough to spoil it. No matter what the piece looks like, either Rodin sculpted it with his own hands or he didn’t. Similarly, no matter how convincingly “real” a digital life might be, there are those who would refuse such a life because it lacks that nebulous quality of authenticity.

Of course, like Greenhalgh’s Rodin piece, and as we’ve already discussed, there’s no certifiable way to prove that what you take for reality isn’t itself a fraud. How do you know you’re not already living in a sophisticated computer simulation right now?

The quest for everlasting life that consumed Gilgamesh and Qin Shi Huang might come to a close sometime this century. Before that happens, however, we must discuss the implications and consequences of a world in which death is no longer certain. Emily Dickinson, abandoning the desire to live forever, muses: “That it will never come again is what makes life so sweet.” If immortality really does become a reality, we will have to reassess that sweetness.

Cyborgs in the Workplace: Why We Will Need New Labor Laws

“We are an equal opportunity employer and do not discriminate against otherwise qualified applicants on the basis of race, color, religion, national origin, age, sex, veteran status, disability, cybernetic augmentation or lack thereof, or any other basis prohibited by federal, state or local law.”

Most of us are accustomed to seeing this equal opportunity clause when we’re filling out job applications — so much so, in fact, that our eyes tend to skim right over it. Chances are, you’ve seen it so often that you completely ignored the first paragraph. But if you go back and read it carefully, you can see what the equal opportunity clause might someday look like.

Yes, you read it right. Get ready to work alongside cyborgs at the office, the shop and the warehouse. Get ready to send your kids off to be taught and babysat by cyborgs. Get ready to engage in water cooler banter with cyborgs, collaborate with cyborgs, attend power meetings with cyborgs and carpool with cyborgs. Get ready to watch laughably sterile corporate videos at your workplace on how to prevent cyborg-discrimination and what to do if you suspect that it’s occurring.

Because inevitably the next major labor rights movement — here in the US and elsewhere around the world — will involve cyborgs in the workplace. To protect them from being denied employment as a result of their modifications, new anti-discrimination laws will need to be passed. Cybernetic implants such as the one cyborg-activist Neil Harbisson wears on a regular basis are out of the ordinary, draw attention to their wearers and therefore might alarm potential employers.

Neil Harbisson’s eyeborg grants him synesthetic abilities such as “hearing” colors as well as seeing colors that baseline humans cannot perceive. His Cyborg Foundation helps promote and defend cyborg rights.

Employers might worry, understandably, that the technology will be used for purposes other than what the wearer claims, lead to workplace rivalries and disputes, create distractions or drive away clients. Let’s be honest here. Not many employers would be too keen on having someone who wears as much hardware as real-life cyborg Steve Mann does work the cash register.

"Hi, I'm here to apply for the school counselor position."

“Hi, I’m here to apply for the school counselor position.”

Steve Mann, an inventor and professor at the University of Toronto, is the perfect example of why such laws will be necessary. In July 2012, Mann was physically assaulted in a Paris McDonald’s by one of its employees, presumably because the assailant didn’t appreciate his odd appearance. Cyber-hate crimes such as this will surely become more common, in the workplace and elsewhere.

Baseline humans who choose to remain cyber-free, or who can’t afford the technology, will also need to be protected, for the opposite reasons. Because they lack whatever skills or enhancements cybernetic humans are granted through wearable or surgically-embedded technology, employers might hesitate to hire them for or promote them to important positions. Let’s say you manage a group of market research analysts. Who would you be more tempted to bring onto your team: a brilliant baseline Harvard graduate? Or a cyborg who has undergone a procedure that boosts his brain’s calculating power to supercomputer levels?

To establish workable, enforceable anti-cyborg-discrimination laws and policies, many questions will first need to be answered.

The most obvious question: what is a cyborg, exactly? Generally speaking, a cyborg is a human who has been modified or augmented with some sort of computer, robotic or cybernetic technology. By this definition, a cyborg is not built from scratch in a manufacturing plant, factory or lab as a robot might be, but is instead conceived through the union of a human egg and sperm cell. Androids, which are nothing more than sophisticated humanoid robots, probably will not be protected under any sort of anti-discrimination laws — at least not until they are sophisticated enough to demonstrate human-like emotions and self-awareness. Given the advances in artificial intelligence, robotics and the reverse-engineering of the human brain, this looks more and more feasible.
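To make the distinction concrete, here is a minimal sketch of that flowchart-style reasoning. It is purely illustrative: the names (Being, legal_status) and the exact criteria are invented for this post, a rough reading of the definitions above rather than anything drawn from an actual statute.

```python
from dataclasses import dataclass

@dataclass
class Being:
    """Illustrative attributes only, loosely following the definitions above."""
    born_human: bool           # conceived from a human egg and sperm cell
    augmented: bool = False    # carries computer, robotic or cybernetic modifications
    self_aware: bool = False   # demonstrates human-like emotions and self-awareness

def legal_status(b: Being) -> str:
    """A hypothetical reading of who such anti-discrimination laws might cover."""
    if b.born_human:
        # Humans are protected either way; augmentation just makes them cyborgs.
        return "cyborg (protected)" if b.augmented else "baseline human (protected)"
    # Manufactured beings: an android might earn protection only once it can
    # convincingly demonstrate self-awareness.
    return "android (possibly protected)" if b.self_aware else "robot (not protected)"

print(legal_status(Being(born_human=True, augmented=True)))   # -> cyborg (protected)
print(legal_status(Being(born_human=False)))                  # -> robot (not protected)
```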

Even so — assuming that we can one day manufacture an android to resemble a human in every conceivable way, to say nothing of why we would ever have the need or desire to create such a being — it’s unclear whether the law would differentiate between a cyborg and an android where labor rights and discrimination in the workplace are concerned. If a corporation can gain personhood status and enjoy certain legal rights and protections, why can’t an android? Would it be cruel and unlawful to make an android work around the clock, even if it showed no signs of fatigue?

MR. POTATO HEAD: Woo-hoo! Five o’clock! It’s Miller Time!
ENGINEER: Not tonight it ain’t.
MR. POTATO HEAD: But Mrs. Potato Head and I —
ENGINEER: I said NO!

When does a human become a cyborg? Where’s the line? Are people with pacemakers, hearing aids and electro-hydraulic prosthetic limbs cyborgs?

Right now, owning and using “distracting” wearable computing such as Google Glass isn’t protected by the law because doing so is a lifestyle choice, sort of like having excessive tattoos, which can also shut enthusiasts out of certain occupations (though these attitudes are quickly changing). But over the coming years, cybernetic implants and augmentations will become increasingly ubiquitous, available in all flavors and degrees of performance. The more these technologies are accepted and used by a majority of people, for a great number of everyday tasks, the less they will seem like a choice. Instead, they will be viewed as essential tools for maintaining a “normal,” productive life, the same as an automobile, computer or phone. Even though it’s technically possible to live without a phone of some kind — smartphone or otherwise — most of us cannot, and so the only real choice in the matter is which brand to buy and which service provider to contract with.

And yet, in 1876, a Western Union internal memo reportedly scoffed at the idea that people would ever need one: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”

Or consider this 1943 comment attributed to Thomas Watson, then chairman of IBM, who doubted any pervasive need for computers: “I think there is a world market for maybe five computers.”

Or this one by Digital Equipment Corp. founder Ken Olsen, as recently as 1977: “There is no reason anyone would want a computer in their home.”

In 1899, the great Belfast-born physicist and engineer William Thomson, Lord Kelvin — who determined the value of absolute zero, among other scientific contributions — is said to have strung together a staggering list of boneheadedly inaccurate predictions: “Radio has no future. Heavier-than-air flying machines are impossible. X-rays will prove to be a hoax.”

Wrong. Wrong. Wrong.

"Shut up."

“Shut your potato hole, why don’t ye, ‘fore I stick me boot up your arse!”

And so it will be with cybernetic implants and augmentations. Most people now doubt that such things could ever become mainstream, but as we’ve seen again and again, exciting new technologies tend to fill lifestyle gaps we never knew existed.

Workers in the US are protected in a number of ways. But if employers are required not to discriminate against those with a certain religious preference, which is very much a lifestyle choice, unlike age, sex and race, then perhaps cyborgs will one day have their rights addressed as well.

_________

I believe that being a cyborg is a feeling, it’s when you feel that a cybernetic device is no longer an external element but a part of your organism. ~Neil Harbisson

Obama Wants Your Brain: Reverse-Engineering the Human Mind

We know why rain falls from the sky and how distant stars are born. We know the exact height of our planet’s tallest peak and the depth of its deepest ocean. We know that all the world’s landmasses split asunder eons ago from one super-continent and that human beings share a common ancestor with apes. We know why whooping cranes migrate, why salmon swim upstream, and why bats hang upside down. We know that the planet Mercury’s core accounts for about 42 percent of its volume and that the surface temperature of Neptune’s moon Triton plunges to as low as -234 degrees Celsius. We know how to split the atom and unleash unimaginable carnage.

Taking into account all the discoveries we’ve made over the past 2,000 years, it’s amazing that what we know least about is, well, us — specifically, the human brain or, as President Barack Obama describes it, the “three pounds of matter that sits between our ears.”

But that will soon change. (And by “soon,” we mean sometime within the next decade.) The president recently unveiled details of an ambitious new plan to map the human brain. According to the White House’s website, this $100 million undertaking might lead to much-needed benefits such as better treatments or even cures for neurological and emotional disorders, including Parkinson’s, PTSD, traumatic brain injury and bipolar disorder. Although much further in the future, the research might also lead to some sort of advanced human-computer language interface.

The official title for the project is — take a deep breath — the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Its ultimate goal is to “produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space.” Furthermore, it aims to determine how exactly “the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.”

This is exciting news indeed, as the NIH BRAIN Initiative could very well end up being Obama’s Apollo 11 moon landing or Human Genome Project — to name only two similarly bold, landmark scientific and exploratory undertakings pushed by Presidents Kennedy and Clinton. Basically what we’re talking about here is reverse-engineering the human brain. By devising a map that explains how the roughly 100 billion neurons in our brains connect, behave and operate, we’ll finally begin to approach an understanding of ourselves that rivals the extent of what we know about the carbon cycle, the mating habits of the great white shark and the composition of Martian soil.

Neural connections in the human brain.

Besides practical applications, the NIH Brain Initiative will hopefully give us answers to questions, both profound and trivial, that have stumped even the greatest minds. For instance:

Why do we blush when we feel embarrassed or ashamed? What’s the evolutionary purpose of laughing and expressing humor? Why are yawns contagious? Why do we dream, and why are dreams sometimes so vivid and lucid that they seem as real as “reality”? Why did every primitive culture develop the idea of divine beings, and why do so many millions of people continue to subscribe to the cults built around them? What is consciousness exactly, and why must it be tied to one single person at all times? How can our brains be so goddamn complex — the best of them able to devise new poetic forms and musical genres, theorize the existence of dark matter and sketch an accurately detailed mural of the New York City skyline from memory — yet so clunky and inefficient that we often have difficulty recalling where we left our car keys or what we just read?

Practical results of this years-long study will not come overnight. Hopefully the BRAIN Initiative’s efforts will lead to new treatments and cures for neurological and neurodegenerative disorders and help us become happier, healthier beings. Beyond that, who knows what else we might find buried deep in the folds of the three pounds of matter that sits between our ears? Just as the Human Genome Project has led to advancements in molecular medicine, DNA forensics and bioarchaeology, the NIH’s research will likely have major neurological and societal implications that will change the face of humanity forever.

How I Learned to Stop Worrying and Love the Cyborg

When we hear the word “cyborg,” we think of an emotionless being that has completely lost or was never granted its individuality or right to privacy. We think of the worst kind of collectivist entrapment, a state of perpetual mindlessness that seeks only to follow directives passed down from some higher authority. We think of the Terminator, Robocop and Star Trek’s Seven of Nine.

This negative attitude we harbor toward the idea of cyborgs has fed a massive backlash against Google Glass, which many people feel is an assault on privacy and individuality. An advocacy group, Stop the Cyborgs, is in fact campaigning to limit the use of intrusive devices such as Google Glass with the intent to “stop a future in which privacy is impossible and central control total.” Likewise, some businesses have already banned the device from their premises. The first such establishment, the 5 Point Cafe in Seattle — which describes Google Glass as a “new fad for the fanny-pack wearing never removing your bluetooth headset wearing crowd” — has now aligned itself with Star Wars’s droid-hating Mos Eisley Cantina.

It should be noted that the 5 Point Cafe banned Google Glass partly out of respect for its patrons’ right to privacy, partly to be sardonic, partly to rabble-rouse and attract media attention — but mostly because the thing looks, well, dumb. Its faux-futuristic, Apple Store aesthetic doesn’t fit the cafe’s Seattle counterculture, hole-in-the-wall reputation. The bar’s slogan, after all, is “Alcoholics serving alcoholics since 1929.”

The 5 Point Cafe’s disapproval of Google Glass also says a lot about the majority of Americans’ attitudes toward what they perceive as a gradual loss of privacy and individual freedoms due to technological intrusion. Smartphones are just as guilty of this as Google Glass, but the latter’s always-visible, always-on, always-pointed-at-you functionality crosses a line that makes many people uncomfortable. We just want to be left the hell alone. The idea of being secretly filmed — by any device, for any reason — makes us squirm, even though we’re knowingly caught on surveillance cameras dozens if not hundreds of times a day. We desire privacy and respect for what makes each of us unique, and when we don’t get it, we feel less-than-human. We feel as if we’re being treated like an animal.

Or worse, we feel as if we’re being treated like a cyborg, which is essentially a tool. And since tools don’t receive empathy or privacy, neither should a cyborg.

So maybe this is why the Mos Eisley Cantina’s barkeep gets all huffy when Luke tries to enter with his recently acquired droids. Although they appear to have emotions and personalities, C-3PO and R2-D2 are really cybernetic frauds, artificial charlatans trying vainly to pass themselves off as equals to the other Cantina patrons. It’s an insult. To the surly proprietor, the droids’ transparent mimicry of self-awareness, and their presumption of the rights that sentient beings enjoy, mock the privilege of actually being sentient.

This repulsion toward androids and cyborgs can be attributed to the uncanny valley effect, first described by roboticist Masahiro Mori in 1970. Simply put, when we are confronted with a robot that resembles a human but doesn’t get human behavior quite right — its eyes might not blink like ours, or its movements might appear too jerky or calculated — it creeps us out.

And so it is with Google Glass. When we eventually start seeing people on the streets wearing Google Glass, it will surely provoke unease and skepticism in some observers.

They might ask: What are they doing with that thing? Am I being recorded or filmed? When I speak to them, are they tuning me out by listening to music, watching a movie or checking the weather forecast? Are they mentally correcting my factual errors using Wikipedia without my knowledge? Are they using face-recognition technology to scan and analyze me? Do they know all about me — my name, my Social Security number, my past, my secrets?

What part of their humanity and uniqueness did they have to give up to enjoy the benefits of Google Glass?

As admirable as Stop the Cyborgs’ and the 5 Point Cafe’s efforts may be, there’s little hope that the cyborg-ification of humans will stop. No child wants to grow up to be a cyborg, yet humanity is increasingly becoming cybernetic. Many people cannot reasonably function without the use of hearing aids, artificial hips, mind-controlled prosthetic limbs or computerized speech generators. These devices are necessities, and no one faults their users for taking advantage of them. Google Glass is admittedly a different beast altogether, as it is an elective tool and could be used to violate non-wearers’ privacy.

But right or wrong, it’s only the beginning. From retinal implants that perform the same tasks as Google Glass and more, to telekinetic tattoos and nanobots, we’ll be so hard-wired with tech that, as futurists such as Kurzweil predict, the line separating man and machine will blur.

By then, will we even care about abstract liberties such as privacy and individuality?

It’s almost impossible to fathom now, but perhaps in the future we’ll look back and wonder why we cherished our individuality so much and resisted collectivism. After all, privacy as we now know it is a relatively modern phenomenon that we take for granted. Most of us wouldn’t be able to tolerate the constant physical togetherness and lack of solitude that defined a medieval European lifestyle. But since then we’ve readjusted our attitudes toward privacy and individuality, and chances are they will need to be readjusted again. Perhaps once most of us are wired to communicate telepathically and always be aware of each other’s locations and identities, we’ll find popular twentieth- and twenty-first-century depictions of cyborgs to be quaint, naïve and, yes, even a little offensive.

The Long Goodbye to the Solar System: Carl Sagan on Voyager 1 and the Golden Record

No one speaks more eloquently about the complexity, the poetry and the majesty of space than Dr. Carl Sagan. In this clip he discusses the far-reaching implications of Voyager and its mission, with special emphasis on the contents of its Golden Record.

Five billion years from now, humans will be no more. Except for our interstellar “message in a bottle,” what other relics will we leave behind for future beings that display the most positive characteristics of our species?

 

Voyager 1: Our Emissary to a “Community of Galactic Civilizations”

Thirty-five years ago, Voyager 1 rocketed out of Earth’s gravitational pull on a mission to gather unprecedented data on our native cluster of planets, moons and asteroids, as well as the space they inhabit.

Today Voyager’s mission continues still — writing home, as it were, of what it sees and otherwise detects. Although some scientists have proclaimed that Voyager has now soared beyond the bubble that separates our system from, well, The Universe, it appears that the reports were made too hastily. However, there’s little doubt that sometime very soon, perhaps within this calendar year, Voyager 1 will become the first spacecraft, the first man-made anything, to escape our Solar System and enter what’s known as interstellar space, a celestial desert whose apparent nothingness veils further mysteries for us to solve.

To a more advanced species, Voyager no doubt resembles the equivalent of a rudimentary canoe that has successfully passed its first sandbar. But for now, entering interstellar space marks a monumental event in human history that has no rival.

Here’s hoping that, many years before we are even given the chance to join a “community of galactic civilizations,” we will have gained the courage and fortitude to solve the problems facing our planet and its inhabitants.

Eating Bugs: Earth’s Culinary Future

It’s inevitable.

Might as well get your taste buds ready now.

Short of providing every family with a Star Trek food-replicator, how else can we adequately feed a population that is likely to exceed nine billion by the year 2050?

Like it or not, bugs are coming to a dinner plate near you. And restaurant and grocery store and street food vendor.

Hundreds of cultures around the world already do as Simba must — and have always done so, in fact. But here in Western society, insects and arachnids carry a certain stigma that will undoubtedly require a generation or two to eliminate, or at least diminish, before people even consider voluntarily placing one in their mouths. Hell, many of us struggle to summon the will to get close enough to a cockroach to stomp on it. Up until now, entomophagy — or bug-eating — has been associated with mental illness (in Bram Stoker’s Dracula, the delusional inmate Renfield feasts on flies and spiders), with survivalists’ last resort before starving to death, and with sadistic American game shows.

Fear Factor is unlikely to be credited with pushing the insect-eating agenda forward.

And of course who can forget the nauseating dinner scene in Indiana Jones and the Temple of Doom?

“Tastes like chi — Nah, fuck it, tastes like a bug.”

And then this guy:

To put it mildly, convincing Europeans and Americans that consuming pests is in their best interest will be no easy task. Advocates of insect-eating might as well try convincing them to eat glass. There’s just something inherently icky about these creatures that prevents most everyone from entertaining the thought of ingesting them — even to stay alive. It’s not a stretch to say that some people, if given the choice between eating only bugs or nothing at all, would sooner die of malnourishment.

But that’s mainly because we’ve been conditioned to believe that insects and arachnids are vile, disgusting creatures that root in dog shit — as if pigs don’t do the same thing. And yet many of those same people consider bacon to be basically meat candy.

One problem is that whenever entomophagy is portrayed in films or on TV, the critters are usually uncooked and sometimes still alive — segmented little bodies writhing about, spindly legs twitching, antennae groping, wings thrumming, blank soulless eyes staring. No wonder the idea disgusts people.

By contrast, prepared and cooked insects and arachnids, besides being nutritious and plentiful, showcase a cornucopia of flavors that people already enjoy. Scorpions taste like shrimp. Termites taste like carrots. Huhu grubs taste like peanut butter. Palm weevil larvae taste like meat candy, or bacon. There’s no reason to think that, were entomophagy to catch on here in the US, everyone would be slurping down live wriggling caterpillars like Simba.

(Incidentally, caterpillars provide more iron and protein than an equivalent serving of minced meat. That’s pretty good, considering that they subsist on leaves and flowers that humans don’t otherwise eat. Livestock in the US alone, by contrast, annually gorge on enough grain to feed an estimated 840 million people, more than the entire population of Europe.)

Outside of the vegetarian and vegan crowd, there’s nothing culturally repugnant about eating chicken tenders, which don’t resemble the feathered animal at all. But would you eat a raw chicken? A live chicken? If that were the most widely depicted way to eat poultry, as it is with bugs, chicken probably wouldn’t appear on too many menus, and the owners of Chick-fil-A would have to find some other means to finance their bigotry against the LGBT community.

What Sarah and Todd don’t realize is that their bags have been filled with live tarantulas.

Raw fish, by the way, is eaten by millions of people every day without their being repulsed by it — though many others are repulsed and refuse to touch the stuff. Like bugs, sushi was until recently a weird, taboo, foreign “food” in Europe and America, not achieving mainstream status until at least the late 1980s. In the 1950s, in fact, the US Embassy in Japan advised visiting Americans to avoid eating uncooked fish — not necessarily because of any evidence that sushi was harmful but because the idea of raw fish was, to Westerners, culturally revolting and barbaric. Sushi was the foodstuff of sick, uneducated and desperate peasants who knew no better than to shovel unsanitary fish-flesh down their gullets.

How attitudes have changed. Sixty years later, no upscale suburb in the US would be complete without at least half a dozen sushi joints.

Half a century from now, will “insectarias” populate our commercial centers? After getting your nails done or hair trimmed, will you think nothing of popping into the nearby “bug bar” for a scrumptious bag of cricket-kabobs and mantis-snaps?

Before you answer “hell no,” consider that you’ve almost certainly eaten bits of bugs today without realizing it — fragments of maggots, ants, aphids, mites and fruit flies. Whether your food came canned, frozen, bagged, processed or directly off the vine, you have most certainly ground up untold insect particles between your molars and swallowed them down.

Perhaps it’s time we start acknowledging insects and arachnids for what they are: an abundant, renewable, inexpensive, nutritious and — when cooked — delicious food source.

Hakuna matata!
