We’re taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what’s at stake and what we need to do to make sure that copyright promotes creativity and innovation.
Last year, YouTube’s Content ID system flagged Sebastian Tomczak’s video five times for copyright infringement. The video wasn’t a supercut of Marvel movies or the latest Girl Talk mashup; it was simply ten hours of machine-generated static. Stories like Tomczak’s are all too common: Content ID even flagged a one-hour video of a cat purring as a likely infringement.
But those stories offer only a small glimpse of a potential Internet future. Today, with the European Parliament just days away from deciding whether to pass a law that would effectively require online platforms to use automated filters, the world is confronting the question of what role copyright bots like Content ID should play on the Internet. Here in the U.S., Hollywood lobbyists have pushed similar proposals that would make platforms’ safe harbor status contingent on using bots to remove allegedly infringing material before any human sees it.
The purring and static videos are extreme examples of the flaws in copyright filtering systems: instances where nothing was copied at all, yet a bot still flagged the upload as infringement. More often, filters ding uploads that do feature some portion of a copyrighted work, but where even the most basic human review would recognize the use as noninfringing. Those instances demonstrate how dangerous it is to let bots make the final decision about whether a work should stay online. We can’t put the machines in charge of our speech.
Mandatory Filters Are a Step Too Far
A decade ago, online platforms looked to copyright filtering regimes as a means to generate revenue for creators and curry favor with content industries. Under U.S. law, there’s nothing requiring platforms to filter uploads for copyright infringement: so long as they comply with the Digital Millennium Copyright Act’s notice-and-takedown procedure, the law protects them from monetary liability based on the allegedly infringing activities of their users or other third parties.
But big rightsholders pressured platforms to do more: YouTube built Content ID in 2007, partially in response to a flurry of lawsuits from big media companies. Since then, Content ID has consistently grown and expanded in scope—with a version of the service now available to any YouTube channel with over 100,000 subscribers. Other companies have followed suit—Facebook now uses a similar filter that even inspects users’ private videos. Now, both in Europe and the United States, lobbyists have pointed to those filters to argue that lawmakers should require every web platform to take even more drastic measures.
That would be a huge step in the wrong direction. Filters are most useful when they serve as an aid to human review. But today’s mandatory filtering proposals turn that equation on its head, forcing platforms to remove uploads—including completely legitimate ones—before a human has a chance to review them.
Hollywood Goes All In on Filtering
The debate over mandatory filters isn’t just about copyright infringement. Last year, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), a misguided and unconstitutional law that forces Internet companies to become more restrictive in the types of speech they allow on their platforms while doing nothing to fight sex traffickers.
During the debates over FOSTA, the bill’s supporters claimed that companies would have no problem deploying filters that could take down unlawful speech while leaving everything else intact. A highly touted letter from Oracle suggested that the technology to make those decisions with perfect accuracy was accessible to any startup.
That’s absurd: by exposing platforms to overwhelming criminal and civil liability for their users’ actions, the law forces platforms to calibrate their filters to err on the side of censorship, silencing innocent people in the process.
It might come as no surprise, then, that two of FOSTA’s biggest supporters were Disney and 20th Century Fox. For Hollywood lobbyists, FOSTA is just one step toward the goal of a more highly filtered Internet.
Don’t Write Bots into the Law
Like FOSTA, mandatory filtering proposals represent a kind of magical thinking about what technology can do. It’s the “nerd harder” problem: a belief that technology will automatically advance to fit policymakers’ specifications if they simply pass a law requiring it to. The reality is that bots can be useful for weeding out cases of obvious infringement and obvious non-infringement, but they can’t be trusted to identify and allow the many instances of fair use in between.
Unfortunately, as the use of copyright bots has become more widespread, artists have increasingly had to tailor their work to the bots’ definitions of infringement rather than fully enjoy the freedoms fair use was designed to protect. Writing bots into the law would make the problem much worse.
Whether it’s fighting copyright infringement or fighting criminal behavior online, it may be tempting to believe that more reliance on automation will solve the problem. In reality, when we let computers make the final decision about what types of speech are allowed online, we build a wall around our freedom of expression.