How Afraid of AI Should We Be?

Whereas talk of doomsday scenarios about killer robots destroying mankind was once the stuff of science-fiction nightmares, more and more people — including some of the world's leading technologists — are sounding the alarm about the perils of AI. But are our fears misplaced? Should we be more afraid of robots or the humans who make and use them?

Somewhere between the '90s and now, a suspicion began to take root among even the most tech-illiterate that we are living in The Future. Not the nebulous experience of the future as a block of time relative to our past and present selves and sandwiched somewhere murky between birth and death, but The Future, as in, that imagined faraway destiny where fashion designers inexplicably (and exclusively) work in silver palettes, cars drive themselves and fly overhead, the service industry is populated by accommodating robots, distance means nothing and interpersonal communication is mediated by technological connections. The subject of many special episodes of sitcoms and cartoons, the post-Y2K world has been spinning faster than we've noticed, landing us squarely in a post-human present hurtling toward a future that many predict will be just what we imagined — or much, much worse.

The term artificial intelligence is loaded with baggage, mostly in the form of robots with human characteristics that are either kindly and benevolent to humans (think C-3PO) or evil with a kill switch that can't be turned off (see the plot of most mainstream sci-fi films). Really, though, AI refers to the development of computer systems and machines that can perform tasks that normally require human intelligence. While we currently only have narrow (weak) AI, designed to perform a specific task like facial recognition, voice recognition (e.g. Siri) or even driving a car, many researchers have their eyes on the long-term prize of artificial general intelligence (AGI), or strong AI: a machine that could successfully perform any intellectual task a human being can. According to the Future of Life Institute, whose scientific advisory board includes Elon Musk, Morgan Freeman, Swedish philosopher Nick Bostrom, Italian computer scientist Francesca Rossi and several other leading minds, "While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task."

"In a United States context, I think one of the most difficult concepts to grasp is that we as a people are no longer in control of the directions in which our society is moving," says N. Katherine Hayles, professor and Director of Graduate Studies in the Program in Literature at Duke University and the author of How We Became Posthuman, a work considered to be "the key text which brought posthumanism to broad international attention," according to the Kilden Journal of Gender Research.

A feeling of despair founded on the disorienting sensation of losing control over one's environment is a hallmark of certain types of depression, and certainly the proliferation of AI has been met with skepticism and the fear that humans will lose their connection to each other and — perhaps even more urgently, in a capitalist society — their spot in the workforce.


"We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny."


New realities like self-checkout at the grocery store, the tableside replacement of waiters with iPads, Google Maps' traffic predictions and the ability of smart assistants like Alexa to make your appointments are so commonplace as to be banal. But flashier moments that hint at the future of AI's capabilities and its potential roles in our lives induce the sort of existential anxiety that is felt at a primal, cellular level: drones hovering down the runway at Fashion Week carrying handbags, sex dolls you can make orgasm with the right algorithmic touch, Facebook chatbots that create their own languages, self-driving trucks that deliver our goods and eerily lifelike robots like Hanson Robotics' crown jewel Sophia.

From 2001: A Space Odyssey to Blade Runner, it's somehow easy for us to imagine worlds in which another form of intelligence threatens to supersede ours. At a more practical level, the human labor that AI renders unnecessary means the loss of jobs on a grand scale. People who lose their jobs to technology may not have the skill sets required to take on new roles in the ever-growing technology sector. In a capitalist society, where does that leave us?

It's a worry pressing enough that the Future of Life Institute created a section on its site dedicated solely to "Existential Risk" that includes charts, graphics and statistics on just how well-founded our fear of AI is. Weighing myths against facts, as scientists are wont to do, the institute's tone is that of a kindly, rational yet empathetic older professor who knows that the aliens are real but is choosing not to reveal everything to you yet lest you totally freak out because you don't know how to work with them.

"During the early years of trains," the Institute gently reminds us, "many worried that the human body couldn't handle speeds greater than 30 miles per hour; people were hesitant to use the first phones for fear of electric shocks or that the devices were instruments of the devil himself; and there were equally dire predictions about planes, heart transplants and Y2K, just to name a few red herrings. While we hope that concerns about [some technologies] prove equally unwarranted, we can only ensure that to be the case with sufficient education, research and intervention. We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny."


"There is great opportunity to improve lives with AI, but if the technology is not developed safely, there is also the chance that someone could accidentally or intentionally unleash an AI system that ultimately causes the elimination of humanity."


That being said, while researchers may stress that we needn't fear that robots (like the roaming Knightscope security robot that mowed down a toddler at a California shopping mall in 2016) will suddenly "turn evil" and start attacking us, there is concern, even among some of the top AI researchers and tech innovators in the world (Bill Gates, Elon Musk and the late Stephen Hawking), about the dangerous misapplications of AI. At the largest technology conference in the world, Web Summit in Lisbon, Portugal, last November, Hawking said, "AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."

One thing most top researchers seem to agree on is that the main concern regarding AI is not with malevolent robots, but, as the Future of Life Institute explains, "with intelligence itself: specifically, intelligence whose goals are misaligned with ours." The institute gives several examples of ways AI with goals "misaligned with ours" could operate to hurt society: "outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." It warns of "a super-intelligent and super-wealthy AI" that "could easily pay or manipulate many humans to unwittingly do its bidding."

These researchers know better than anyone the power technology holds, and while the machines themselves may currently exist in a plane of moral neutrality, centuries of human history have taught us that people are capable of grand-scale evil. Bluntly, the Future of Life Institute states, "There is great opportunity to improve lives with AI, but if the technology is not developed safely, there is also the chance that someone could accidentally or intentionally unleash an AI system that ultimately causes the elimination of humanity." In short, worry less about the machines themselves and more about the goals of the people making them.

A report titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation" published in February details exactly how AI could become both a conduit and a creator of disaster. The report, which draws on the expertise of 26 AI experts, including top minds from Elon Musk's non-profit research firm OpenAI, Cambridge University's Centre for the Study of Existential Risk, the Future of Humanity Institute (not to be confused with the Future of Life Institute) and others, details in 100 pages how AI could be manipulated (and in some cases, already is) to cause mass mayhem, destruction and confusion.

The report mentions threats to digital security (e.g. cyberattacks, automated hacking and the use of speech synthesis for impersonation, the last of which has already been done semi-successfully to make convincing but fake videos of Barack Obama); physical security (e.g. malicious swarms of thousands of micro-drones a la the creepy bee episode of Black Mirror, the deployment of autonomous weapons systems — think missiles that could make decisions about where and when to strike on their own — or self-driving vehicles that are hacked to crash into crowds); and political security (e.g. the creation of targeted propaganda through the utilization of mass-collected data along the lines of what Russian hackers did to disrupt the 2016 U.S. presidential election and the manipulation of videos, known as Deepfakes, which have already taken off in the making of hyper-realistic, fake celebrity porn).

One factor amplifying AI anxiety, beyond even the general insecurity, fear and malaise rampant throughout the Western world and beyond, is the impending crisis of climate change. Yuval Noah Harari, an Israeli historian, professor of history at the Hebrew University of Jerusalem and the author of two international bestsellers on, roughly, the history and fate of mankind, has said that climate change will be the catalyst that ushers in world-changing technological development. When asked in a recent interview with The Guardian if environmental degradation would halt technological progress, Harari replied, "I think it will be just the opposite — that, as the ecological crisis intensifies, the pressure for technological development will increase, not decrease. I think that the ecological crisis in the 21st century will be analogous to the two world wars in the 20th century in serving to accelerate technological progress."

Indeed, while the quest for human-level, strong AI was long considered the stuff of science-fiction dreams, recent breakthroughs in AI have caused leading experts to rethink their predicted timelines of technological advancement. Surveys of experts still show disagreement across the board, though: while some experts believe we are centuries away from human-level AI, many AI researchers at the 2015 AI Safety Conference in Puerto Rico guessed that this holiest of technological grails would happen before the year 2060.


"Homo sapiens as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different."


Making scientific and technological predictions for the future is notoriously difficult. In 1933, less than 24 hours before Leo Szilard conceived of the nuclear chain reaction, the respected nuclear physicist Ernest Rutherford ended a lecture to the British Association for the Advancement of Science by essentially saying the search for atomic energy would inevitably be fruitless, calling it "moonshine." In 1956, Astronomer Royal Sir Richard van der Riet Woolley is said to have called interplanetary space travel "utter bilge." On the flip side, William Gibson famously "predicted the internet" with his 1984 science fiction novel Neuromancer.

When it comes to the future of AI, many of the wildest predictions are related to the functioning of the brain itself, and how it might merge with technology. A startup called Kernel, for example, is working to create a neural prosthetic that can be used to enhance brain capacity even more than the "smart drugs" that have become so popular in tech circles. Elon Musk's brain-computer interface venture Neuralink is focused on creating devices meant to be implanted in the human brain for similar purposes. Engineers at Samsung are already hard at work trying to one-up Google Glass by putting the Internet onto contact lenses. "Smart dust," or tiny AI cameras no bigger than a grain of salt, threatens to follow you around in decades to come, recording your every move (makes you a bit nostalgic for the FBI agent watching you through your laptop). And Ray Kurzweil, a prolific inventor and Google's chief futurist, believes we are approaching the possibility of technology-assisted immortality.

"Most people on record worrying about superhuman AI guess it's still at least decades away," the Future of Life Institute says. "But they argue that as long as we're not 100% sure that it won't happen this century, it's smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it's prudent to start researching them now."


Beyond malicious actors actively seeking to manipulate technology to destructive ends, there's also a certain kind of passive and unexamined bias that's both uniquely and pervasively human, influencing everything it touches, including the programming underlying AI.


"Whenever a species enters into a symbiotic relationship with another species, there are benefits as well as risks," Hayles says. "With humans and networks and programmable machines, the benefits are clear: more information flowing through networks, more control mechanisms developed that can handle these exponentially increasing flows, more delegation of mundane tasks to intelligent machines and devices, more complex and faster infrastructural networks. The risks are the new vulnerabilities that arise as one species becomes increasingly dependent on another for its livelihood and indeed, life itself. If the electronic networks crash, either inadvertently or through cyber warfare, millions of humans will die because they will lack water, food distribution, shelter, heat, etc. I consider much more far-fetched the 'Terminator'-type fictions that imagine the machines will take over. Humans invent these machines, control their energy requirements, implement and maintain them, and junk them when they become obsolete. Rather, the changes will be more indirect, more distributed and slower."

To this end, the researchers behind the 100-page doomsday report recommend a four-point action plan to start putting safeguards in place. They suggest that policymakers collaborate closely with technical researchers; that AI engineers "take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities"; that researchers and policymakers develop best practices for addressing dual-use concerns like computer security; and that those same parties get even more people involved, actively seeking "to expand the range of stakeholders and domain experts involved in discussions of these challenges."

And the top companies in the AI field are working toward these solutions. Though they are competitors, Amazon, Apple, Google/DeepMind, Facebook, IBM and Microsoft have partnered to create The Partnership on AI to Benefit People and Society, a non-profit organization whose goal is "to ensure that applications of AI are beneficial to people and society." Amazon's vice president for global innovation policy Paul Misener says, "We believe that artificial intelligence technologies hold great promise for improving the quality of people's lives and can be used to help humanity address important global challenges."

Beyond malicious actors actively seeking to manipulate technology to destructive ends, there's also a certain kind of passive and unexamined bias that's both uniquely and pervasively human, influencing everything it touches, including the programming underlying AI. Researcher Joy Buolamwini found that the AI-powered facial recognition systems of Microsoft, IBM and Chinese company Face++ had 34% more errors identifying dark-skinned women than light-skinned men. It turns out that when software engineers, still predominantly white males due to a historical lack of diversity in STEM fields, train their facial-recognition algorithms on mostly images of other white males, the algorithm takes on the bias of the humans programming it. Bias and outright hate can also be learned by a machine exposed to the influences of the outside world: in 2016, Microsoft had to shut down Tay, an AI Twitter bot programmed to have "conversational understanding," just 16 hours after it was launched, when it began spewing anti-Semitic and sexist rhetoric it had learned from other users online.
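The disparity Buolamwini documented is, at bottom, a gap in error rates between demographic subgroups, which is straightforward to measure once a system's predictions are labeled by group. Below is a minimal Python sketch of such an audit; the classifier outputs, group names and numbers are hypothetical stand-ins for illustration, not data from the actual Gender Shades study.

```python
# Toy audit of a classifier's error rates by demographic subgroup.
# All records below are invented for illustration purposes only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (predicted_label, true_label, group) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for predicted, actual, group in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (prediction, ground truth, subgroup).
audit = [
    ("male", "male", "lighter-skinned men"),
    ("male", "male", "lighter-skinned men"),
    ("male", "male", "lighter-skinned men"),
    ("female", "female", "darker-skinned women"),
    ("male", "female", "darker-skinned women"),   # misclassified
    ("male", "female", "darker-skinned women"),   # misclassified
]

rates = error_rates_by_group(audit)
gap = rates["darker-skinned women"] - rates["lighter-skinned men"]
print(rates)
print(f"error-rate gap: {gap:.0%}")
```

The point of a check like this is that a model can look accurate overall while failing badly for the groups least represented in its training data, which is exactly the pattern Buolamwini's audit surfaced.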

And while the implications of racist and sexist intelligent machinery for our everyday lives are as varied as the current problems we face, Hayles argues that AI will exacerbate the inequality already present in society, far beyond the everyday inconveniences people of color and white women already deal with because of bias.

"The issues are, who will have access to the intelligent technologies? Who will decide the purposes for which they are used, who will be in control of their development and implementation, and who will be responsible for constraints on their use?" Hayles says. "My prediction is that the results will be complex and not all in one direction. In some respects, human lives will improve in some areas and for some people, and in other respects, human lives will be put at risk and will devolve relative to the prospects available to others."

In his Guardian interview, Harari asserts that "Homo sapiens as we know them will probably disappear within a century or so, not destroyed by killer robots or things like that, but changed and upgraded with biotechnology and artificial intelligence into something else, into something different." If history teaches us anything, access to such biotechnology will be anything but even across class, race and gender. When an entire gender is denied full rights to bodily autonomy, or when populations are discriminated against to the point of disenfranchisement and limited access to health care, education and other resources, as is the case in too many places now, why should we assume that things will be any different when superhuman AI finally arrives?

Acknowledging both this reality and also the looming negative effects of an environment out of whack, Hayles argues, "We desperately need new ideas and discussions that begin from the reality of interdependence, limited resources, sustainability and collective responsibility for the world in which we live and the planet we inhabit." She adds, "If the environment ceases to be habitable by humans or social mechanisms cease to function properly, no one, no matter how rich, privileged or powerful, will be able to insulate himself or herself from the ensuing disasters. It's high time we realize that for better or worse we are all in this together, and plan accordingly."

Photos via Getty