What, exactly, did Facebook, Twitter, Google, and other tech giants do to empower or enable bad actors (foreign governments, radical organizations, Russians) to influence the outcome of the 2016 elections? How did it happen? Who is to blame? How can we prevent it from happening again?
Congress has an idea. Sen. Mark Warner (D-Virginia) and Sen. Amy Klobuchar (D-Minnesota) have urged their colleagues and constituents to support a bill that would regulate campaign ads running on digital platforms. That sounds like a plan, but it won't produce the desired outcome. In fact, the very concept of this bill makes assumptions about the nature of communications, command, and control that are no longer valid.
There is no central authority
The internet is not TV or radio—highly regulated, government-granted monopolies propagated over airwaves controlled by the government. The internet is not a newspaper with an editor-in-chief, a publisher, a physical address and a license to conduct business. The internet is not Facebook, Twitter, Google or any group of tech companies. The internet is not Verizon, AT&T, Comcast or any carrier or network. The internet is not “in the cloud,” at your internet service provider or located anywhere.
The internet is everywhere. It is a vast computer network (made up of millions of smaller networks) using a specific set of communications protocols. It cannot be shut down, it cannot be disabled, and it cannot be destroyed. Most importantly, it cannot be regulated because there is no central authority to regulate.
Can’t we regulate something?
Just for the sake of argument, let's pretend a law were passed requiring Facebook, Twitter, Google, and other online outlets to apply the Bipartisan Campaign Reform Act of 2002, aka the McCain-Feingold Act, to paid political advertising. You would start seeing and hearing the tagline you've come to know and ignore: "I'm so-and-so candidate and I approve this message."
Great! Now we’ve regulated political ads online. Except it won’t change a thing. Legitimate political advertising, even super-well-targeted attack ads filled with misinformation, is not what’s at play here. We are dealing with something much, much harder to identify and impossible to stop: people communicating with each other.
Here’s what we’re up against
I wrote about this in October 2015 and again in June 2017. The methodology used by bad actors to exert pressure and social influence on the unsuspecting public is self-organizing.
Most self-organizing systems work by giving a few very simple instructions to completely autonomous entities and letting them act on their own.
The strategy requires four cohorts: ideators, propagators, supporters, and executors. Ideators will come up with the instructions and post them. Propagators will share, retweet or otherwise propagate the instructions. Supporters will not propagate but will tacitly support the propagators. Executors will carry out the instructions.
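To make the dynamic concrete, here is a minimal sketch of the four-cohort model as a toy agent simulation. Everything in it is an illustrative assumption (the agent names, the follower graph, the round-based spread); it is not a description of any real platform's mechanics, only of the division of labor the article describes: ideators originate, propagators reshare, supporters see but stay passive, executors act.

```python
class Agent:
    """One participant in the self-organizing campaign (hypothetical model)."""
    def __init__(self, name, role):
        self.name = name
        self.role = role          # "ideator", "propagator", "supporter", or "executor"
        self.followers = []       # agents who see whatever this agent posts
        self.seen = False         # has this agent been exposed to the instruction?

def run_campaign(agents, rounds=3):
    """Spread a single instruction through the network.

    Ideators originate it; ideators and propagators reshare it to their
    followers each round; supporters and executors never reshare.
    Returns the names of executors who were exposed (i.e., who would act).
    """
    for a in agents:
        if a.role == "ideator":
            a.seen = True
    for _ in range(rounds):
        for a in agents:
            if a.seen and a.role in ("ideator", "propagator"):
                for follower in a.followers:
                    follower.seen = True
    return [a.name for a in agents if a.role == "executor" and a.seen]

# Tiny illustrative network: one agent per cohort.
ida = Agent("ida", "ideator")
pam = Agent("pam", "propagator")
sue = Agent("sue", "supporter")
eve = Agent("eve", "executor")
ida.followers = [pam, sue]
pam.followers = [eve]

print(run_campaign([ida, pam, sue, eve]))  # ['eve']
```

Note that no central coordinator appears anywhere in the loop: the instruction reaches the executor purely through autonomous resharing, which is exactly what makes this pattern so hard to regulate at the platform level.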