Category Archives: Technology

The twentieth century gave us the automobile, the airplane, radio, television, the computer and the Internet — all of which changed how we live. Which emerging technologies will change our culture and society this century?

The One Thing “Back to the Future Part II” Got Right About 2015

When Back to the Future Part II was released in 1989, the year 2015 was a lifetime away. I saw the movie with a group of friends for my twelfth birthday party, and we were floored by the cool technology depicted in the future scenes. Flying cars! Hoverboards! Power-lacing Nikes! We all agreed: it would be a long wait indeed until 2015.

Here we are, midway through 2013, and so much of what Back to the Future Part II promised has yet to be realized. Like all twelve-year-olds who saw the movie, we were absolutely convinced that, of all the futuristic contraptions and devices, the hoverboard was the most likely to appear on store shelves. In fact, director Robert Zemeckis and his production staff successfully duped many moviegoers into believing that they had invented a real hoverboard specifically for the film.

But alas, no hoverboards. No flying cars. No power-lacing Nikes or self-drying jackets. No holographic movie trailers, food hydrators or retractable fruit buffets. Mercifully so, the movie’s prediction of early-21st-century fashion has remained fictional: no double-ties, no wearing our jeans pockets inside-out, no metallic sunglasses (or whatever the hell those things are that Doc Brown wears).

So what did the movie get right, if anything?

Watch the following clip to find out:

That’s right, Marty’s children are using devices that resemble Google Glass. So in the world of Back to the Future, Google Glass is invented ostensibly before the Internet. That’s not quite as dramatic as inventing aerosol deodorant before the wheel, as the 50-armed Jatravartids do in The Hitchhiker’s Guide to the Galaxy, but from today’s vantage point it’s striking nonetheless.

Had Marty, Doc and Jennifer visited the real 2015 — just two years from now — how would their reactions be different? Marty calls the power-lacing Nikes “far out.” What would he have thought about iPhones, Xboxes and GPSes?


Breathe Easy: New Scientific Breakthrough Can Save You from Running out of Oxygen

As children we all played the game where we competed to see who could hold their breath longer than anyone else in the pool. Think back. How long could you hold it before the burning sensation compelled you to surface and gulp lungfuls of air? Thirty seconds? A minute? A minute and a half?

As impressive as those times are, you soon might be able to hold your breath for up to 30 minutes without any adverse effects.

Amazingly, scientists at the Boston Children’s Hospital have devised a microparticle that, when injected into the bloodstream, can super-oxygenate a person’s blood and allow them to live for up to 30 minutes without having to take a single breath.

Currently the person with the distinction of holding his breath the longest is German freediver Tom Sietas, who in June 2012 remained underwater for a staggering 22 minutes and 22 seconds. With the new technology, you could outlast his record by nearly eight minutes.

Such a scientific and medical breakthrough has countless applications, the most obvious of which is saving lives in emergency rooms and hospitals. Every household’s medicine cabinet might one day store a microparticle dispenser of some kind next to the Tylenol and cough medicine, ready to be administered to a family member who gets a chicken bone lodged in his throat. Long-distance runners might use the technology to ensure that their blood receives enough oxygen. As a precaution, parents might give their children a shot of super-oxygen before dropping them off at the community pool or taking them out on the fishing boat. People who enjoy erotic asphyxiation — the act of deliberately restricting airflow for sexual pleasure — could keep a treatment on hand in case things go too far. (Of course, someone else would need to be present to administer the dosage, since they would be unconscious.)

Imagine the military applications. Soldiers who never tire? Navy SEALs who need not surface for air until the most opportune time?

The DC character Aquaman, who can speak telepathically with sea creatures as well as breathe underwater, often gets mocked by readers and geeks for having the least useful and desirable superpowers among his fellow Justice League members.

As funny as Family Guy’s mockery of him is, no one would scoff at a person’s amazing ability to hold his breath for half an hour, thereby making him King of the Pool.

But like any new cutting-edge technology, it might take some time before these so-called microparticles are available for general consumption. So, you know, don’t hold your breath.


Imminent Immortality: Do You Really Want to Live Forever?

For as long as humans have wandered the earth, our mortality has been front and center in our long list of woes. In every culture, in every age, people have attempted to cheat death. One of the most famous examples is Qin Shi Huang, king of the Chinese state of Qin in the third century BCE. Obsessed with living forever, he ordered his alchemists and physicians to concoct an elixir of life. They obliged and presented him with what they believed might grant him eternal life. Unfortunately for Qin Shi Huang, what they gave him was a handful of mercury pills, and he died upon consuming them.


Maybe they were just tired of looking at his douchey headwear and debilitatingly huge shoes.

We’ve come a long way since Qin’s day, so much so that immortality — or at least unprecedented longevity — appears increasingly plausible sometime this century. Inventor and futurist Ray Kurzweil seems so sure of it that he allegedly takes upwards of 200 dietary supplements a day as a “bridge to a bridge,” a way of staying alive until radical longevity becomes the norm. The May 2013 issue of National Geographic, in fact, features this very topic.


For now, however, they say we die twice: once when we take our last breath, and again when our name is uttered for the last time.

Our greatest literature, both ancient and modern, seems to confirm this attitude. Countless examples suggest that as much as we strive to achieve everlasting life, death is our inescapable fate. To seek a loophole is folly and smacks of the worst kind of hubris. The earliest such tale, some four thousand years old, relates the ancient Mesopotamian king Gilgamesh’s quest for everlasting life following the death of his friend Enkidu. Although Gilgamesh ultimately fails in his undertaking, he achieves a sort of immortality in the minds of his people as a result of his heroic exploits. The same arrogance is seen in the Greek demigod Achilles, who was said to be impervious to harm in all parts of his body except his ankle, which his mother Thetis failed to immerse in the river Styx. Near the end of the Trojan War, he is slain by the lethal accuracy of Paris’s arrow, but Achilles’s courageous feats guarantee that his name lives on in perpetuity.


One thing he wasn’t known for was his modesty.

For those of us who lack the godlike strength and derring-do of Gilgamesh, Achilles, Heracles and other ancient and Classical heroes, the only hope we have of gaining immortality is through emerging age-reversing technology and research into the human brain. Our two leading options appear to be an indefinite halt to the aging process or a sort of digital resurrection — uploading our minds into vast computer servers. But is either of these options desirable?

The former option, the perpetuation of our corporeal bodies, seems at this point to be more scientifically plausible but far less satisfactory. Many stories warn of the dangers of unnaturally extending the shelf life of our flesh and bones. The legend of the Wandering Jew, for instance, convinces us that everlasting life is a curse, a waking nightmare that results only in unfathomable despair and desperation. According to the legend, the old man scours the world seeking someone who will exchange his mortality for the wanderer’s cursed immortality. For nearly two centuries now, Mary Shelley’s gothic novel Frankenstein; or, The Modern Prometheus has terrified readers with the personal, societal and religious implications of reanimating dead tissue. Alphaville’s 1980s anthem of youth “Forever Young” rejects the notion of immortality for its own sake:

It’s so hard to get old without a cause
I don’t want to perish like a fading horse
Youth’s like diamonds in the sun
And diamonds are forever

Forever young, I want to be forever young
Do you really want to live forever, forever and ever?

What’s the use of everlasting life, Alphaville argues, if we can’t maintain a youthful spirit? Better to die with a hopeful eye on the future than to trudge meaninglessly through eternity.


Immortality without fabulous hair, eye shadow and colorful jumpsuits? No deal!

Poets routinely insist that the only fulfilling way for us to achieve immortality is through our art and innovations. In Shakespeare’s “Sonnet 18,” the speaker promises a youth and possible lover that “thy eternal summer shall not fade, / … Nor shall death brag thou wander’st in his shade.” Because he has composed the sonnet in the youth’s honor, his memory will last for as long as the poem exists: “So long as men can breathe, or eyes can see, / So long lives this, and this gives life to thee.”

Of course, there are just as many counterarguments to the idea that art leads to eternal life. Romantic poet Percy Shelley’s poem “Ozymandias” tells of a wanderer who comes across a “lifeless,” eroded statue in the desert, whose pedestal reads:

My name is Ozymandias, King of Kings:
Look on my works, ye mighty, and despair!

Despite the statue’s former grandeur, “Nothing beside remains. Round the decay / Of that colossal Wreck, boundless and bare / The lone and level sands stretch far away.” Even this mysterious king’s exploits and fame — whatever they might have been — couldn’t save his memory from the ravages of time. Not only has he died the first death but, as evidenced by the wasteland of his forgotten realm, the second as well. American filmmaker Woody Allen echoes this sentiment: “I don’t want to achieve immortality through my work. I want to achieve it through not dying.”

But the question remains — is not dying desirable?

If most of us one day have the opportunity to extend our lives indefinitely, how will that change the dynamics of society and culture? A typical person living to 80 years of age goes through several dramatic changes in his lifetime: his opinions and attitudes change, as do his interests, his friends, his career, sometimes even how he remembers the past. Imagine how much change would take place over a thousand years of life! You wouldn’t even be a shadow of the person you once were. Some workers put in 30 or 40 years’ worth of service at a single company or organization, or work in a single industry for as many years, but how dull it would be to continue for centuries beyond that. We celebrate when couples reach fifty years of marriage, but could any of them reach 100 years? Two hundred? A thousand? Roughly half of marriages already end in divorce. Would couples, knowing that they are going to live for hundreds of years, wed with the firm understanding that they will eventually split? And how would immortality affect patriotism?

Let’s pretend for a moment that the Wandering Jew really exists. For close to two thousand years, he has shuffled down countless roads, cane in hand, trying to find some fool to take his place. He clearly cannot be the same person now as he was during the time of the Romans. He’s seen far too much and met far too many people to hold on to whatever prejudices he once had. What “science” he might have believed as a young man has since been obliterated. The language he spoke for centuries, Aramaic, will soon die out. His ancient brand of Judaism is no more. He claims no country as his own. Having lived for two thousand years, he has seen the rise and fall of dozens of nations and empires. He has come to realize the arbitrariness and fragility of borders as well as of tribal and national pride.

Leaving aside the unpleasantness of experiencing eternity as a decrepit old man charged with the impossible task of giving away your decrepitude, what is it about immortality that attracts people so? As Caesar declares in Shakespeare’s Julius Caesar:

Of all the wonders that I yet have heard,
It seems to me most strange that men should fear,
Seeing that death, a necessary end,
Will come when it will come.

Digital Rapture

The second path to immortality involves uploading our minds onto computer servers, a solution advocated by thinkers such as Kurzweil and Dmitry Itskov. Doing so would immediately eliminate many of the problems outlined above. You need not age in a digital landscape, for one thing. And since your whole existence amounts to lines of computer code, you could conceivably “program” yourself to avoid feeling depression, sadness, doubt and other negative emotions.

But there are other problems in this scenario.

If we upload our minds onto computers, we can “live” for as long as we wish, or as long as the data remains properly archived and resistant to fragmentation, viruses and hacking. After all, the official Space Jam website hasn’t aged a day since it launched back in 1996. But even if every last facet of our memories, temperament, interests, dislikes and habits carries over into the merry old land of ones and zeros, are the digital copies really “us” — the essential us — or simply clever simulations? What’s lost, if anything, in the transfer from a carbon-based world to a silicon one? Perhaps the earliest opportunities to experience immortality will be faulty and disastrous, resulting in regrettably botched versions of our psyches.


Something’s not… quite… right.

Let’s say you upload your mind today. Now there are two “yous,” the analog you and the digital you. After your analog self dies, your digital self “lives” on. It will no doubt continue to assert that it is just as “real” as you ever were because it has the same memories, the same personality, the same tics and religious beliefs and tastes in women (or men, or both). Otherwise, how can it claim to be you? One of the problems here, if indeed there is one, is that you — the meat sack version — won’t survive to enjoy the immortality you’ve passed on to this immaterial copy of yourself.

Is “good enough” simply not good enough?

We place such a high premium on authenticity. Even if the digital copy of yourself is identical in every possible way, it’s still not the “you” that emerged from your mother’s womb. The same argument can be made with regard to art forgeries, some of the best of which are sold at auction as the real deal. Shaun Greenhalgh, possibly history’s most successful art forger, was so good that he managed to dupe both casual and expert art enthusiasts for years and make close to a million pounds before being caught. Anyone who has one of his remarkably convincing pieces sitting in their house — one of his Rodin knockoffs, for instance — could plausibly tell visitors that they do indeed own a Rodin. Nothing about the piece itself gives away the deception, only the abstract fact of its inauthentic origin. But for most people, that fact is enough. No matter what the piece looks like, either Rodin sculpted it with his own hands or he didn’t. Similarly, no matter how convincingly “real” a digital life might be, there are those who would refuse such a life because it lacks that nebulous quality of authenticity.

Of course, like Greenhalgh’s Rodin piece, and as we’ve already discussed, there’s no certain way to prove that what you take to be reality isn’t itself a fraud. How do you know you’re not living in a sophisticated computer simulation right now?

Gilgamesh’s and Qin Shi Huang’s quests for everlasting life might finally come to a close sometime this century. Before that happens, however, we must discuss the implications and consequences of a world in which death is no longer certain. Emily Dickinson, abandoning the desire to live forever, muses: “That it will never come again is what makes life so sweet.” If immortality does become a reality, we will have to reassess that sweetness.

Cyborgs in the Workplace: Why We Will Need New Labor Laws

“We are an equal opportunity employer and do not discriminate against otherwise qualified applicants on the basis of race, color, religion, national origin, age, sex, veteran status, disability, cybernetic augmentation or lack thereof, or any other basis prohibited by federal, state or local law.”

Most of us are accustomed to seeing this equal opportunity clause when we’re filling out job applications — so much so, in fact, that our eyes tend to skim right over it. Chances are, you’ve seen it so often that you completely ignored the first paragraph. But if you go back and read it carefully, you can see what the equal opportunity clause might someday look like.

Yes, you read it right. Get ready to work alongside cyborgs at the office, the shop and the warehouse. Get ready to send your kids off to be taught and babysat by cyborgs. Get ready to engage in water cooler banter with cyborgs, collaborate with cyborgs, attend power meetings with cyborgs and carpool with cyborgs. Get ready to watch laughably sterile corporate videos at your workplace on how to prevent cyborg-discrimination and what to do if you suspect that it’s occurring.

Because inevitably the next major labor rights movement — here in the US and elsewhere around the world — will involve cyborgs in the workplace. To protect them from being denied employment as a result of their modifications, new anti-discrimination laws will need to be passed. Cybernetic implants such as the one cyborg activist Neil Harbisson wears on a regular basis are out of the ordinary, draw attention to their wearers and therefore might alarm potential employers.


Neil Harbisson’s eyeborg grants him synesthetic abilities such as “hearing” colors as well as seeing colors that baseline humans cannot perceive. His Cyborg Foundation helps promote and defend cyborg rights.

Employers might worry, understandably so, that the technology will be used for purposes other than those the wearer claims, lead to workplace rivalries and disputes, create distraction or drive away clients. Let’s be honest here. Not many employers would be keen on having someone who wears as much hardware as real-life cyborg Steve Mann does work the cash register.


“Hi, I’m here to apply for the school counselor position.”

Steve Mann, an inventor and professor at the University of Toronto, is the perfect example of why such laws will be necessary. In July 2012, Mann was physically assaulted in a Paris McDonald’s by one of its employees, presumably because the assailant didn’t appreciate his odd appearance. Cyber-hate crimes such as this will surely become more common, in the workplace and elsewhere.

Baseline humans who choose to remain cyber-free, or who can’t afford the technology, will also need to be protected, for the opposite reasons. Because they lack whatever skills or enhancements cybernetic humans are granted through wearable or surgically-embedded technology, employers might hesitate to hire them for or promote them to important positions. Let’s say you manage a group of market research analysts. Who would you be more tempted to bring onto your team: a brilliant baseline Harvard graduate? Or a cyborg who has undergone a procedure that boosts his brain’s calculating power to supercomputer levels?

To establish workable, enforceable anti-cyborg-discrimination laws and policies, many questions will first need to be answered.

The most obvious question: what is a cyborg, exactly? Generally speaking, a cyborg is a human who has been modified or augmented with some sort of computer, robotic or cybernetic technology. By this definition, a cyborg is not built from scratch in a manufacturing plant, factory or lab as a robot might be, but instead conceived through the union of a human egg and sperm cell. Androids, which are nothing more than sophisticated humanoid robots, probably will not be protected under any sort of anti-discrimination laws — at least not until they are sophisticated enough to demonstrate human-like emotions and self-awareness. Given advances in artificial intelligence, robotics and the reverse-engineering of the human brain, that day looks more and more feasible.

Robot flowchart

Even so — assuming that we can one day manufacture an android to resemble a human in every conceivable way, to say nothing of why we would ever have the need or desire to create such a being — it’s unclear whether the law would differentiate between a cyborg and android where labor rights and discrimination in the workplace are concerned. If a corporation can gain personhood status and enjoy certain legal rights and protections, why can’t an android? Would it be cruel and unlawful to make an android work around the clock, even if it showed no signs of fatigue?


MR. POTATO HEAD: Woo-hoo! Five o’clock! It’s Miller Time!
ENGINEER: Not tonight it ain’t.
MR. POTATO HEAD: But Mrs. Potato Head and I —

When does a human become a cyborg? Where’s the line? Are people with pacemakers, hearing aids and electro-hydraulic prosthetic limbs cyborgs?

Right now, owning and using “distracting” wearable computing such as Google Glass isn’t protected by the law because doing so is a lifestyle choice, sort of like having excessive tattoos, which likewise bars enthusiasts from certain occupations (though these attitudes are quickly changing). But over the coming years, cybernetic implants and augmentation will become increasingly ubiquitous, available in all flavors and degrees of performance. The more these technologies are accepted and used by a majority of people, for a great number of everyday tasks, the less they will seem like a choice. Instead, they will be viewed as essential tools for maintaining a “normal,” productive life, the same as an automobile, computer or phone. Though it’s technically possible to go without, most of us cannot do without a phone of some kind — smartphone or otherwise — and for this reason, the only real choice in the matter is which brand of phone to buy and which service provider to contract with.

And yet, in 1876, a Western Union internal memo scoffed at the idea that people would ever need such a device: “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us.”

Or consider this 1943 comment made by Thomas Watson, then-chairman of IBM, who doubted the pervasive need for computers: “I think there is a world market for maybe five computers.”

Or this one by Digital Equipment Corp. founder Ken Olson, as recently as 1977: “There is no reason anyone would want a computer in their home.”

In 1899, the great Irish physicist and engineer William Thomson, Lord Kelvin — who determined the value of absolute zero, among other scientific contributions — strung together a staggering list of boneheadedly inaccurate predictions: “Radio has no future. Heavier-than-air flying machines are impossible. X-rays will prove to be a hoax.”

Wrong. Wrong. Wrong.


“Shut your potato hole, why don’t ye, ‘fore I stick me boot up your arse!”

And so it will be with cybernetic implants and augmentations. Many people now doubt that such things could ever become mainstream, but as we’ve seen again and again, exciting new technologies tend to fill lifestyle gaps we never knew existed.

Workers in the US are already protected in a number of ways. If employers are required not to discriminate against those with a certain religious preference — very much a lifestyle choice, unlike age, sex and race — then perhaps cyborgs will one day have their rights addressed as well.


I believe that being a cyborg is a feeling, it’s when you feel that a cybernetic device is no longer an external element but a part of your organism. ~Neil Harbisson

Obama Wants Your Brain: Reverse-Engineering the Human Mind

We know why rain falls from the sky and how distant stars are born. We know the exact height of our planet’s tallest peak and the depth of its deepest ocean. We know that all the world’s landmasses split asunder eons ago from one super-continent and that human beings share a common ancestor with apes. We know why whooping cranes migrate, why salmon swim upstream, and why bats hang upside down. We know that the planet Mercury’s core accounts for about 42 percent of its volume and that the surface temperature of Neptune’s moon Triton plunges to as low as -234 degrees Celsius. We know how to split the atom and unleash unimaginable carnage.

Taking into account all the discoveries we’ve made over the past 2,000 years, it’s amazing that what we know least about is, well, us — specifically, the human brain or, as President Barack Obama describes it, the “three pounds of matter that sits between our ears.”

But that will soon change. (And by “soon,” we mean sometime within the next decade.) The president recently unveiled details of an ambitious new plan to map the human brain. According to the White House’s website, this $100 million undertaking might lead to much-needed benefits such as better treatments or even cures for neurological and emotional disorders, including Parkinson’s, PTSD, traumatic brain injury and bipolar disorder. Although much further in the future, the research might also lead to some sort of advanced human-computer language interface.

The official title for the project is — take a deep breath — the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. Its ultimate goal is to “produce a revolutionary new dynamic picture of the brain that, for the first time, shows how individual cells and complex neural circuits interact in both time and space.” Furthermore, it aims to determine how exactly “the brain enables the human body to record, process, utilize, store, and retrieve vast quantities of information, all at the speed of thought.”

This is exciting news indeed, as the NIH BRAIN Initiative could very well end up being Obama’s Apollo 11 moon landing or Human Genome Project — to name only two similarly bold, landmark scientific and exploratory projects pushed by Presidents Kennedy and Clinton. Basically what we’re talking about here is reverse-engineering the human brain. By devising a map that explains how the roughly 100 billion neurons in our brains connect, behave and operate, we’ll finally begin to approach an understanding of ourselves that rivals the extent of what we know about the carbon cycle, the mating habits of the great white shark and the composition of Martian soil.


Neural connections in the human brain.

Besides practical applications, the NIH Brain Initiative will hopefully give us answers to questions, both profound and trivial, that have stumped even the greatest minds. For instance:

Why do we blush when we feel embarrassed or ashamed?

What’s the evolutionary purpose of laughing and expressing humor?

Why are yawns contagious?

Why do we dream, and why are dreams sometimes so vivid and lucid as to seem as real as “reality”?

Why did every primitive culture develop the idea of divine beings, and why do so many millions of people continue to subscribe to the cults built around them?

What is consciousness, exactly, and why must it be tied to one single person at all times?

How can our brains be so goddamn complex — the best of them able to devise new poetic forms and musical genres, theorize the existence of dark matter and sketch an accurately detailed mural of the New York City skyline from memory — yet so clunky and inefficient that we often have difficulty recalling where we left our car keys or what we just read?


Practical results of this years-long study will not come overnight. Hopefully the BRAIN Initiative’s efforts will lead to new treatments and cures for neurological and neurodegenerative disorders and help us become happier, healthier beings. Beyond that, who knows what else we might find buried deep in the recesses of the three pounds of matter that sits between our ears? Just as the Human Genome Project has led to advancements in molecular medicine, DNA forensics and bioarchaeology, the NIH’s research will likely have major neurological and societal implications that will change the face of humanity forever.

How I Learned to Stop Worrying and Love the Cyborg

When we hear the word “cyborg,” we think of an emotionless being that has completely lost or was never granted its individuality or right to privacy. We think of the worst kind of collectivist entrapment, a state of perpetual mindlessness that seeks only to follow directives passed down from some higher authority. We think of the Terminator, Robocop and Star Trek’s Seven of Nine.


This negative attitude we harbor toward the idea of cyborgs has led to a massive backlash against Google Glass, which many people feel is an assault on privacy and individuality. An advocacy group, Stop the Cyborgs, is in fact campaigning to limit the use of intrusive devices such as Google Glass with the intent to “stop a future in which privacy is impossible and central control total.” Likewise, some businesses have already banned the device from their premises. The first such establishment, the 5 Point Cafe in Seattle — which describes Google Glass as a “new fad for the fanny-pack wearing never removing your bluetooth headset wearing crowd” — has thereby aligned itself with Star Wars’s droid-hating Mos Eisley Cantina.


It should be noted that the 5 Point Cafe’s ban on Google Glass is partly out of respect for its patrons’ right to privacy, partly to be sardonic, partly to rabble-rouse and attract media attention — but mostly because the thing looks, well, dumb. Its faux-futuristic, Apple Store aesthetic doesn’t fit the cafe’s Seattle counterculture, hole-in-the-wall reputation. Its slogan, after all, is “Alcoholics serving alcoholics since 1929.”

The 5 Point Cafe’s disapproval of Google Glass also says a lot about many Americans’ attitudes toward what they perceive as a gradual loss of privacy and individual freedom to technological intrusion. Smartphones are just as guilty of this as Google Glass, but the latter’s always-visible, always-on, always-pointed-at-you functionality crosses a line that makes many people uncomfortable. We just want to be left the hell alone. The idea of being secretly filmed — by any device, for any reason — makes us squirm, even though we’re knowingly caught on surveillance cameras dozens if not hundreds of times a day. We desire privacy and respect for what makes each of us unique, and when we don’t get it, we feel less than human. We feel as if we’re being treated like an animal.

Or worse, we feel as if we’re being treated like a cyborg, which is essentially a tool. And since tools don’t receive empathy or privacy, neither should a cyborg.

So maybe this is why the Mos Eisley Cantina’s barkeep gets all huffy when Luke tries to enter with his recently acquired droids. Although they appear to have emotions and personalities, C-3PO and R2-D2 are really cybernetic frauds, artificial charlatans vainly trying to pass themselves off as equals to the other Cantina patrons. It’s an insult. To the surly proprietor, the droids’ transparent mimicry of self-awareness, and their presumption to the rights sentient beings enjoy, mocks the privilege of genuine sentience.

This repulsion toward androids and cyborgs is an instance of the uncanny valley effect, first described by roboticist Masahiro Mori in 1970. Simply put, when we are confronted with a robot that resembles a human but doesn't get human behavior quite right — its eyes might not blink like ours, or its movements might appear too jerky or calculated — it creeps us out.

And so it is with Google Glass. When we eventually start seeing people on the streets wearing it, the sight will surely provoke unease and skepticism in some observers.

They might ask: What are they doing with that thing? Am I being recorded or filmed? When I speak to them, are they tuning me out by listening to music, watching a movie or checking the weather forecast? Are they mentally correcting my factual errors using Wikipedia without my knowledge? Are they using face-recognition technology to scan and analyze me? Do they know all about me — my name, my Social Security number, my past, my secrets?

What part of their humanity and uniqueness did they have to give up to enjoy the benefits of Google Glass?

As admirable as Stop the Cyborgs' and the 5 Point Cafe's efforts may be, there's little hope that the cyborg-ification of humans will stop. No child wants to grow up to be a cyborg, yet humanity is increasingly becoming cybernetic. Many people cannot reasonably function without hearing aids, artificial hips, mind-controlled prosthetic limbs or computerized speech generators. These devices are necessities, and no one faults their users for taking advantage of them. Google Glass is admittedly a different beast altogether, as it is an elective tool and could be used to violate non-wearers' privacy.

But right or wrong, it's only the beginning. From retinal implants that perform the same tasks as Google Glass and more, to telekinetic tattoos and nanobots, we'll be so hard-wired with tech that, as futurists such as Ray Kurzweil predict, the line separating man and machine will blur.

By then, will we even care about abstract liberties such as privacy and individuality?

It’s almost impossible to fathom now, but perhaps in the future we’ll look back and wonder why we cherished our individuality so much and resisted collectivism. After all, privacy as we now know it is a relatively modern phenomenon that we take for granted. Most of us wouldn’t be able to tolerate the constant physical togetherness and lack of solitude that defined a medieval European lifestyle. But since then we’ve readjusted our attitudes toward privacy and individuality, and chances are they will need to be readjusted again. Perhaps once most of us are wired to communicate telepathically and always be aware of each other’s locations and identities, we’ll find popular twentieth- and twenty-first-century depictions of cyborgs to be quaint, naïve and, yes, even a little offensive.

The Long Goodbye to the Solar System: Carl Sagan on Voyager 1 and the Golden Record

No one speaks more eloquently on the complexity, the poetry and the majesty of space than Dr. Carl Sagan. In this clip he discusses the far-reaching implications of Voyager and its mission, with special emphasis on the contents of its Golden Record.

Five billion years from now, humans will be no more. Besides our interstellar "message in a bottle," what other relics that display the most positive characteristics of our species will we leave behind for future beings?


Voyager 1: Our Emissary to a “Community of Galactic Civilizations”


Thirty-five years ago, Voyager 1 rocketed out of Earth's gravitational pull on a mission to gather unprecedented data on our native cluster of planets, moons and asteroids, as well as on the space they inhabit.

Today Voyager's mission continues — writing home, as it were, of what it sees and otherwise detects. Although some scientists have proclaimed that Voyager has already soared beyond the bubble that separates our system from, well, The Universe, those reports appear to have been made too hastily. There's little doubt, however, that sometime very soon, perhaps within this calendar year, Voyager 1 will become the first spacecraft, the first man-made anything, to escape our Solar System and enter what's known as interstellar space, a celestial desert whose apparent nothingness veils further mysteries for us to solve.

To a more advanced species, Voyager no doubt resembles a rudimentary canoe that has just cleared its first sandbar. But for now, entering interstellar space marks a monumental event in human history, one without rival.

Here’s hoping that, many years before we are even given the chance to join a “community of galactic civilizations,” we will have gained the courage and fortitude to solve the problems facing our planet and its inhabitants.

A Real-Life Matrix by 2045?

If you grew up in the 80s, you might remember a TV show called Tales from the Darkside. It was little more than a poor man’s Twilight Zone, but occasionally an episode aired that surprised and shocked you.

One such episode was titled "Mookie and Pookie." I know — terrible names, but it gets better. The episode features two teenage twins, one of whom, Mookie, is dying from a terminal illness. In his few remaining days, he frantically works to complete the instructions for a sophisticated computer program that he makes Pookie promise she will carry out after his death. Once he dies, Pookie keeps her promise and obsessively follows her deceased brother's instructions, buying exotic computer parts, assembling them, writing code. She does this despite not having a clue what the result might be and despite her parents' insistence that she's wasting her time and money on a project conceived out of desperation. Then the day arrives when she finishes the final step, and after she eagerly boots up the mystery machine, she hears the voice of — presto! — her late brother Mookie. He has risen from the dead! Sort of. Buried somewhere in the ones and zeros and computer circuitry is his consciousness, as present and aware as any healthy teenager — sans physical body.

The episode ends not with newly digitized Mookie taking over the world's electric and information infrastructure, but on a warm note with the entire family, computer-boy included, playing a round of Scrabble.

“What if I told you that living in the Matrix is actually as dull as spending a lazy Sunday afternoon with the fam?”

As hokey as Mookie and Pookie’s story is, cybernetic immortality might very well become a reality. Dmitry Itskov, a Russian businessman and founder of Initiative 2045, is currently seeking investors to fund research that will lead to eternal life — with a catch. The catch, of course, is that your body does not persist indefinitely; instead, your consciousness — what makes you you — lives on in a cybernetic Matrix-like environment.

But what’s a body other than a sack of meat to encase one’s consciousness?

That’s the official stance, at least, of Initiative 2045, whose main scientific goal is to “create technologies enabling the transfer of an individual’s personality to a more advanced non-biological carrier, and extending life, including to the point of immortality.”

To repeat: a “more advanced non-biological carrier.” The explicit assumption is that what millions of years of biological evolution have granted us is vastly, unfathomably inferior to what a few short decades of computer research can achieve. Which is an amazing testament to human intelligence and ingenuity.

Initiative 2045 sees immortality as entirely plausible, a scientific problem that requires a gradual series of intermediary “trans-humanistic transformations,” starting with the replacement of body parts — limbs as well as organs — with non-biological, cybernetic components… and ultimately ending with the replacement of our meat sacks with ones and zeros.

Your future family portrait?


This step-by-step process is analogous to futurist Ray Kurzweil’s concept of the “bridge to bridge” path to immortality, which is why he allegedly takes between 180 and 210 vitamin and mineral supplements a day: to sustain his carbon-based body long enough to see the day when he no longer needs his carbon-based body. Such a radical change in human existence — when shuffling off our mortal coils results not in our deaths but our cybernetic rebirths — unquestionably qualifies as a Singularity event.

As exciting as this all sounds, what remains to be answered by Dmitry Itskov and others is the existential nature of a life lived in cyberspace. What will people "do" with their time — infinite time, for that matter? Will we fall in love, have families, go to work, play Scrabble? Will it be necessary to emulate a "normal" life, complete with the laws of physics and the need to eat and sleep? All we "know" is what we've seen in sci-fi classics such as William Gibson's groundbreaking cyberpunk novel Neuromancer and the films Tron and The Matrix. But of course sci-fi tends to exaggerate the implications of speculative technology. Maybe cyberspace will end up as ho-hum as normal space often is.

Or maybe we’re already living in a computer simulation, as many have earnestly theorized. How would we know? After all, what we think of as “reality” is nothing more than a sophisticated construct our minds have created based on sensory data. Colors, sounds, flavors, pain, euphoria — these are all interpretations of the world beyond our senses. At the rate computer science is accelerating, it’s perfectly plausible to imagine an advanced human culture with the capability and means to replicate the experience of, well, life.

Consequently, if we are indeed living in a future culture's simulation and, while in that simulation, devise a way to upload our consciousnesses into a separate cyberspace, there's no end to the levels of Inception-like simulations we could be simultaneously experiencing.

Let’s just hope that at least one of them is more interesting than an afternoon playing Scrabble with our folks.

Planet Open Source: Will 3D Printers Bring an End to the World Economy?

Man is born free, and everywhere he is in chains. ~Jean-Jacques Rousseau (The Social Contract)

*  *  *

From the first day to this, sheer greed was the driving spirit of civilization. ~Friedrich Engels

*  *  *

Look, a guy who builds a nice chair doesn’t owe money to everyone who ever has built a chair, okay? ~Mark Zuckerberg (played by Jesse Eisenberg) in The Social Network (2010)

*  *  *

You see, money doesn’t exist in the twenty-fourth century. The acquisition of wealth is no longer the driving force in our lives. We work to better ourselves and the rest of humanity. ~Capt. Jean-Luc Picard in Star Trek: First Contact, explaining to a twenty-first-century woman how the “economics of the future” differ from hers


It’s often been said that humans are the only species who pay to live on Earth. Before we emerge from our mothers’ wombs — indeed, even while we patiently gestate inside our mothers’ uteri — we have already assured our parents an often insurmountable heap of expenses and debt for which they are responsible: food and clothes and toys and books and medical attention and hobbies and extracurricular activities and cars and higher education. By the time she turns 17, a child born in 2011 will have cost an average middle-class family $234,900. For no other reason than she came flailing and screaming into the world.

Will our penchant for commodifying every last scrap of our existence still remain strong in the year 2099? For how many more decades will humanity tolerate being enslaved by an imaginary, man-made monetary system that favors the very few, just as feudalism did centuries ago?

European feudalism, of course, lasted only 700 or 800 years before gradually giving way to what we now call capitalism — a term popularized by socialist Karl Marx, of all people. And like feudalism, modern capitalism has its roots in human bondage. Its rise to prevailing social system in the Western world would have been far more difficult had it not been for the lucrative human-trafficking business. In fact, large American financial corporations such as the Warren Buffett-run Berkshire Hathaway, the now-defunct Lehman Brothers, JPMorgan Chase and Wachovia all came to prominence as a result of their involvement, one way or another, in the African slave trade.

Nearly 150 years after the abolition of slavery, the business model of commodifying human life still persists. We often talk about how much this celebrity or that politician is worth, as if monetary wealth is a person's ultimate defining characteristic. We put a price on basic human needs such as food, shelter and health care — a price that's too frequently beyond the means of many families. Around the world, human trafficking remains a thriving industry. We've even gone so far as to grant legal personhood to corporations.

Under the capitalistic model, people are commodities, and commodities are people.



But like feudalism, capitalism will one day buckle under the weight of its many inherent shortcomings. A system that sets arbitrary value on goods, services and people is doomed to fail.

The question is: What will take its place? If capital and the drive to acquire wealth no longer exist in Capt. Picard’s twenty-fourth century, how are goods and services exchanged? What motivates people to go to work and be productive members of society when imaginary Monopoly money is no longer the reward?

As always, the Future Culturalist is mum on details regarding the nature of the economy in the year 2099.


Linux and Wikipedia

For solutions on how to move past a capitalistic social system, we might look at the thriving world of open source software. Millions upon millions of people use the free operating system Linux exclusively, as an alternative to the pricey and, many would argue, inferior Microsoft Windows. People don't necessarily use Linux because it's free; they use it because the source code is open to the public, allowing for greater creativity, innovation and collaboration than the closed, corporate-owned Windows can offer.

Another good example is Wikipedia, the free online encyclopedia. There are over 4,100,000 articles in the English-language version alone, all of them maintained and contributed to by “ordinary” users. In the past, Wikipedia has been criticized for allowing baseless or false information to appear on its site, but vigilant contributors tend to correct the work of Wikipedia “vandals” pretty quickly.

The University of Kentucky: you'll never find a more wretched hive of scum and villainy.


That collaborators will never receive any monetary compensation or royalties doesn’t stop them from modifying and improving Linux and Wikipedia. They work, as Capt. Picard says, to better themselves as well as humanity. It’s the take-a-penny-leave-a-penny model of innovation that attracts such people.

Some work to improve open source software. Some simply use it. Those who try to abuse it — and many do — are outed and ultimately barred from participating. Everyone benefits.

Andre Charland, an Internet software developer, had this to say after open-sourcing one of his company’s products: “You can’t do it soon enough. You’ll be blown away by how much better your code gets and how much more quickly you can reach a broader audience.”

Case in point: those who use Linux know how much more efficiently it runs than Windows. And thanks to the success of Wikipedia, when was the last time you used a paper-bound encyclopedia?

Should be “Wikipedia Brown” now.


How then can we emulate this model in our general social system? What would an open source economy look like? How would it function?


3D Printing

In Star Trek, people of the future routinely enjoy the convenience of replicators, which are machines that can synthesize pretty much anything you want them to by rearranging subatomic particles into food, water, clothes, toys, spare parts and much more.

How about paper money? A pile of gold Krugerrands? A $1 billion check issued by the IRS?

This is forgery, of course, but no doubt anyone with a replicator would use it as a personal ATM. After all, entertainment and software companies have tried cracking down on piracy and illegal file-sharing, but they still lose billions annually, and the amount is likely to rise. The problem will only grow with the availability of 3D printers, which are simply precursors to replicators.

Will this...


...eventually become this?


It’s apparent that, were everyday folk permitted to own powerful Star Trek-caliber replicators, it would spell the end of the economy as we know it — or at least of physical currency of any kind. Perhaps this is why the acquisition of wealth is no longer important in Picard’s time. With wealth available to all as plentifully as oxygen, it loses its uniqueness and desirability (and “wealth” here means anything of value, not just currency). Consequently, there would be no reason to work in exchange for wealth.

Think of all the companies and businesses that would instantaneously be rendered obsolete. If you scroll through the Fortune 500 for 2012 — a ranking of the U.S.'s highest-revenue companies — you'll find it populated by corporations whose goods and services many of us can't easily do without: oil, food, banking, communications, automobiles, retailing. In the top ten alone, we find three oil companies, two automakers, a tech firm, a sprawling conglomerate, a massive holding company, a mortgage lender and the world's largest retailer. The #1 company, ExxonMobil, posted revenues of close to half a trillion dollars. The combined revenue of these ten companies amounts to over $2 trillion — greater than the GDP of all but a handful of countries and about an eighth of the U.S. national debt.
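If you want to check that arithmetic yourself, here's a quick back-of-the-envelope sketch. The company names and revenue figures below are rounded approximations of the 2012 Fortune 500 list (fiscal year 2011), and the debt figure is an approximate mid-2013 value — treat them as illustrative assumptions, not an authoritative dataset.

```python
# Rough sanity check of the Fortune 500 figures quoted above.
# All revenues are rounded approximations of the 2012 list and
# are for illustration only.
top_ten_revenues = {
    "ExxonMobil": 452e9,
    "Walmart": 447e9,
    "Chevron": 246e9,
    "ConocoPhillips": 237e9,
    "General Motors": 150e9,
    "General Electric": 147e9,
    "Berkshire Hathaway": 144e9,
    "Fannie Mae": 138e9,
    "Ford": 136e9,
    "Hewlett-Packard": 127e9,
}

us_debt_2013 = 16.7e12  # approximate U.S. national debt, mid-2013

combined = sum(top_ten_revenues.values())
print(f"Combined top-ten revenue: ${combined / 1e12:.2f} trillion")
print(f"Share of U.S. debt: {combined / us_debt_2013:.0%}")
```

The total comes out a little over $2.2 trillion, roughly 13 percent of the debt figure, which squares with the "about an eighth" claim.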

What role would these corporations serve in a world in which replicators were as ubiquitous as cell phones?

Because they hold patents on their intellectual property, they would undoubtedly charge you a service fee for replicating their products. Interested in putting genuine Exxon fuel in your car (assuming we still use internal combustion engines)? Service fee. Hungry for a McDonald's Big Mac? Swipe your card here. Got your eye on the latest Apple iDevice? Pay up.

Already, 3D printers are raising sticky copyright issues. But as more and more people own 3D scanners and printers, the problem will become too great for corporations to manage, as we are currently seeing in the entertainment industry. Even if the devices come with preventive measures, hacking will become widespread.

It’s perfectly conceivable, in fact, that once 3D printers are as powerful as the replicators featured in Star Trek, intellectual property will no longer be relevant. Everything will be open source, readily available and, indeed, modifiable to anyone with access to such technology. Because if acquiring wealth is no longer important, what would motivate an individual or corporation to legally protect intellectual property?


Planet Open Source

The most fervent capitalists will inevitably balk at the idea of an entirely open source world. This is just techno-socialism, they might say. If no one owns his or her ideas, competition would cease and innovation would die. Plus, if no one works to acquire wealth and make a living, idleness and crime will prevail. While these are valid arguments, there are a couple of strong counterarguments.

First, for the majority of human history, we've done without copyrighting, trademarking, patenting and other ways of protecting intellectual property. And yet somehow we mustered the drive, curiosity and ingenuity required to make great strides in every area of human knowledge: science, art, literature, music, metallurgy, woodworking, astronomy, agriculture, fashion. Good thing, too: imagine if Gurg had been allowed to take out a patent on his invention, the wheel. It's absurd to think about.


Second, the availability of replicators would not lead to idleness and crime. In fact, it would have the exact opposite effect on society. Crimes of passion such as rape and murder might still exist, but with everyone's basic needs met and poverty and desperation eliminated, there would be little reason to steal. You could make the case that crimes would still be committed by those with mental disorders, but replicators would give the afflicted free access to the best medications, so long as they were responsible enough to take them regularly or had the help of a family member or social worker.

And as for idleness: free of the stress and inconvenience of having to work for a business or company that means little to you other than a way to pay the bills, people might then have the time and energy to pursue other goals in life. They could “work to better [themselves] and the rest of humanity,” instead of a corporation’s bottom line. Rather than greed and cutthroat competition, the driving forces in society would be self-improvement and collaboration.

Like all the futuristic technology featured in Star Trek — or any sci-fi, for that matter — replicators seem too distant a notion to ever become a reality. But a reality it will one day become: we are already witnessing its inception with the 3D printer. Coupled with the power of open sourcing, universal replication will help usher in a new kind of economy, one that doesn’t favor the few and necessitate the arbitrary commodification of goods, services and human life.
