When I went public with revelations that Cambridge Analytica had harvested 87 million profiles using an app that Facebook had authorized, Facebook's first reaction was to immediately ban me — not only from Facebook but also from all the other companies that Facebook owns. That means I couldn't even go on Instagram. All the apps that used Facebook authentication just showed errors — so my Tinder, Uber and countless other apps stopped working.
Within a day of coming out as "the first Millennial whistleblower," I was deleted from the Internet.
One week later, my ban was debated in the British Parliament. The UK Secretary of State for Digital, Culture, Media and Sport said: "Of all the different things that have surprised me and shocked me in this revelation, the decision by Facebook to take down the whistleblower's Facebook account, and the removal of their WhatsApp account and the Instagram account, was the most surprising."
He went on to call the ban "outrageous" — because it revealed the unrestrained power technology companies have over users when a person's entire online presence can be so abruptly eliminated from existence. There is no due process or check on this power, and Facebook's decision to ban whistleblowers raises a serious question for our society.
What happens to our democracy when tech companies can delete, at will, people who dissent, scrutinize or speak out?
Facebook's actions against me also show the serious consequences of Silicon Valley's rush to consolidate the ownership of different platforms. This unchecked monopoly on digital space presents a serious risk to people's rights.
We need to think carefully about what we are letting Silicon Valley create.
Every year, people are buying more and more Internet-enabled devices and appliances. Soon the so-called "Internet of Things" will become the norm in our households. Algorithms will be driving our cars and organizing our lives. People are just starting to put first-generation AI into their homes, like Amazon's Alexa or Google Home. Now even Facebook has come out with an AI-enabled smart home device called "Portal" — complete with cameras connected to their ad network. What a future we have to look forward to when Facebook watches you eat breakfast and babysit your kids.
These devices are just the first step toward the eventual integration of cyberspace with our physical space. We are creating a future where our homes will think about us. Where our cars and offices think about us. Where our streets and buildings have motivations and intentions. These ambient AI systems will be joined into interconnected information networks — omnipresent, watching, seeking to influence and optimize us — and yet remain invisible to us. It's beyond creepy; it sounds almost divine.
What will it mean to be human when for the first time in history we will be living in spaces that think about us and for us?
With the emergence of AI, being human will, for the first time, mean that we aren't the ones mastering our environment — our environment may come to master us. You are becoming an object of optimization, and that should give everyone pause. Throughout human history, only a few industries have relied on transforming people into products — the slave trade, the sex trade and the organ trade. We are now at the precipice of adding a fourth industry to this list — the data trade. It will be human trafficking of our data selves, where your identity and your behavior are the product.
Silicon Valley uses language to obfuscate and hide what they do. Surveillance networks are called "communities," the used are "users," server facilities are "clouds," addictive design is called "user experience" and people's identities are profiled from "data exhaust" or "digital breadcrumbs" — as if human behavior itself is simply a worthless waste product.
The civil rights movement was fought to de-segregate society, but under the auspices of "personalization," algorithms are beginning to re-segregate society into echo chambers and cognitive monocultures. This system is becoming a regime with the power to record, modify and commodify human behavior — a modern-day manifestation of the panopticon. Behavior becomes a resource. Reality itself is continuously and infinitely tweaked as computers learn to manipulate people's behaviors to generate the most profit. Being human is being disrupted.
Companies like Facebook see "cyberspace" as a lawless frontier, much in the same way early colonists saw America — as a new empty territory open for the taking, what colonists called Terra Nova. At first contact, there were Indigenous cultures who thought Europeans were divine messengers with their new technologies like steel, gunpowder and tall ships. Today we lionize start-up founders and imbue them with the same godlike status — their apps and algorithms are the new steel and gunpowder.
But as Indigenous people quickly learned, these were no gods but merely conquerors and empires seeking to exploit new resources. In the 21st century, programs like Facebook's "Free Basics" serve to help Facebook capture entire peoples and monopolize a mining operation — but instead of gold or oil, data is the sought-after resource. But the problem is that data cannot be uncoupled from your identity and your humanity.
Just as they did in old colonial wars, today's superpowers are fighting over control of this Terra Nova and its people and resources. What happened in the US elections with Russian interference was the beginning of a new form of warfare. And this new warfare has birthed new types of weapons of mass destruction — weapons able to cause widespread and enduring devastation. Those who fight with these new informational weapons systems have the ability to impact citizens, cities and entire societies en masse. The effects will play out and endure over generations — just look at the chaos caused by the election of Donald Trump or the Brexit referendum.
To me, we are living in the aftermath of the detonation of a cyber WMD.
We protect our borders on land, at sea and in the air with dedicated public agencies. We do not leave this critical public service to private companies or landowners. We should protect our digital spaces with the same level of care. As I testified to the US Senate, the security of communities online is one of the most pressing national security issues of the 21st century. This is not an emerging problem on the horizon. This is not a niche issue. This is a problem today, in the here and now, affecting the literal billions of people globally who use social media.
Our civil defense agencies need a clearer understanding of the rules of engagement for informational attacks. If Russia had dropped propaganda leaflets by airplane over Florida or Michigan, that would be universally condemned as a hostile act. We'd shoot down the plane. But this is exactly what is happening online. We must address these issues before disinformation and information warfare become pervasive in cyberspace.
We also need to understand that some of the biggest threats to our society incubate online. ISIS is not just a terrorist organization — ISIS is a digital-first brand. They spread their propaganda online. They organize online. They recruit online. And yet the American military still acts like boys with toys — buying new tanks and missile systems, not hiring programmers and data scientists.
But would the military ever hire someone like me? A guy with a nose ring and neon-colored hair, who comes in late and sashays around the office whenever he feels like it? No. And frankly, I don't know a single programmer who would pass their drug test. So the military loses out on a lot of talent, and our cyber defense suffers from a lack of human diversity.
Data is becoming our generation's electricity. It is powerful, it is useful and it can be dangerous. And just like the electricity that surrounds us, we cannot escape data. Honestly, what job can you get if you refuse to use Google, Facebook or LinkedIn? Can you really have a social life without your mobile phone? Without Instagram and WhatsApp? These are not real choices people have. To live in modern society, people have to use these platforms.
Online platforms' terms and conditions present us with a false choice, because using the Internet is no longer a choice. We cannot opt out of the 21st century. So let's stop pretending like people have a real choice when they click "accept." We need to stop with the false dichotomy between our privacy rights and living in a modern digitized society.
This is why we need to end the Wild West of cyberspace. Without regulation, the Internet, data and AI systems can — and will — be misused. We cannot keep relying on the promises, apologies or good intentions of tech companies to protect citizens. They have failed too many times to deserve our trust.
The Internet is a vital public utility — like electricity, water and roads. So let's treat it like one — and regulate it like one. We need to stop talking about these platforms as if they are a service. Social media is not a service — it is an architecture. And it is an architecture that we have no choice but to use. We do not let people "opt in" to buildings with faulty wiring that lack fire exits. That would be unsafe — and no "terms and conditions" pasted on a door would let any architect get away with building dangerous spaces. So why should the engineering and architecture of software and online platforms be any different?
We need a new social contract for the Internet — get rid of these platforms' policies and replace them all with a single set of Universal Terms and Conditions based on principles that put people first. And although the Internet does not exist within a single jurisdiction, we have international treaties on the most random things, like what happens to lost luggage on international flights (it's called the Montreal Convention). Surely we can come up with some kind of framework for something so fundamental to the functioning of our species.
There is a culture in Silicon Valley that treats ethics as secondary to innovation. This needs to stop. Doctors, lawyers, accountants, teachers and many other professions are required to comply with a statutory code of ethics. If physical engineers and architects have to comply with professional standards, why not software engineers and database architects? The people designing the Internet need to start considering the impact their experiments may have on the future of our society. And when they don't, there should be consequences — just as a doctor or lawyer cannot act unethically without running the risk of losing their licenses to practice.
But ultimately, change will have to start with these tech companies. Facebook's mantra is famously "Move fast and break things." But at what cost? We should not move fast into this new frontier if it means we break our society in the process.
Illustrations: Austin Call (@duhrivative)