A few weeks ago, we wrote about a troubling provision that the Senate Intelligence Committee had inserted into this year's intelligence authorization bill, which would require social networks to report to the government
any "terrorist activity" they see on their systems. As we noted, this has all sorts of problems, and seems designed more to (1) generate headlines and (2) chill free speech than to do anything useful. Thankfully, Senator Ron Wyden has put a hold on the bill
specifically over this provision.
“There is no question that tracking terrorist activity and preventing online terrorist recruitment should be top priorities for law enforcement and intelligence agencies,” Wyden said, in a statement for the record today. “But I haven’t yet heard any law enforcement or intelligence agencies suggest that this provision will actually help catch terrorists, and I take the concerns that have been raised about its breadth and vagueness seriously.”
“Internet companies should not be subject to broad requirements to police the speech of their users,” Wyden continued.
But the issue goes even deeper than that. As Markham Erickson has written, there are significant free speech concerns
raised by this provision, in large part because "terrorist activity" is not defined at all. Anywhere. It's just this vague term -- and given that companies may face liability for not reporting "terrorist activity" to the government, you can bet an awful lot of perfectly fine and protected speech is going to get reported. And that's worrisome.
A key problem with Section 603, however, is that the trigger for the reporting mandate is based on the vague and undefined term “terrorist activity.” This term is not a term of art in the US criminal code and arguably goes well beyond criminal activity to speech that is protected under the First Amendment.
Erickson also points out why the comparison supporters have drawn between this bill and the one requiring companies to report child porn doesn't hold up: child porn is "per se unlawful and never protected speech" under the US Constitution. "Terrorist activity," by contrast, is just vague.
The NCMEC reporting obligations, however, relate to images that are per se unlawful and are never protected speech under the US Constitution. A government mandate that an Internet company report facts and circumstances connected to the vague and overbroad term “terrorist activity” certainly would result in overbroad reporting to the government of speech that is protected under the First Amendment.
And, on top of that, this move would give other countries a blueprint for how to demand tech companies hand over information on users:
More troubling, if adopted, the provision would serve as a global template for other countries to impose reporting requirements for activities those jurisdictions deem unlawful. This would be particularly problematic with countries that regulate speech, including political speech, and with authoritarian regimes that would demand that Internet companies police their citizens’ activities.
And, finally, as noted, with such a vague term and the threat of serious liability, companies are going to be pressured into serious over-reporting:
Section 603 also creates a practical compliance problem. Because no one knows the definition of “terrorist activity,” how does one counsel a client to establish a compliance protocol under the proposal?
Any company would be at risk that if it did not report “terrorist activity,” it could be liable if there were a subsequent event that resulted in loss of life, limb, or property. Likely, this would result in designing a protocol to over-report anything that could be considered “terrorist activity.” Given the massive scale of content shared and created on the Internet daily, this would result in reporting of items that are not likely to be of material concern to public safety and would create a “needle in the haystack” problem for law enforcement. This serves no one’s purposes and adds privacy concerns to the First Amendment concerns noted above.
This creates a perverse incentive for a company to avoid obtaining knowledge of any activity that would trigger the reporting requirement—the exact opposite of what the proponents of the legislation want. Yet, designing such an avoidance protocol is nearly impossible. If even one low-level employee received an over-the-transom email about a “terrorist activity,” knowledge of the activity can be imputed to the entire company – exacerbating the potential liability faced by an Internet company.
Of course, these days, it seems like most in the Senate go by headlines rather than any actual understanding of the issues. Hopefully, at least this one time, they'll actually listen to Senator Wyden.