The Brazilian Senate is rushing to approve a draft bill that seriously undermines privacy and free expression. Dubbed the “Fake News Law,” PLS 2630/2020 aims to tackle an intricate problem, and responses to that problem must be carefully designed in a democratic and participatory manner. In contrast to the Brazilian Civil Rights Framework for the Internet, a law approved in 2014 with broad and intense social participation, PLS 2630/2020 has been marked by a rushed debate, during a period in which legislative activities are operating under exceptional conditions because of the COVID-19 pandemic.
After an alarming version of the text that had not been officially filed was almost put to a vote last Tuesday, the bill’s original author presented a substitute text, and other proposals remain under discussion. While it seeks to curb the spread of disinformation online, the bill lacks the precision needed to prevent abusive reports and interpretations. It also creates criminal offenses, prohibitions, and obligations that hamper legitimate ways of expressing ourselves online and severely expose users’ communications.
We want to emphasize some of the bill’s most concerning points:
Providers Are Required to Retain the Chain of Forwarded Communications for a Year
Social networks and private messaging applications would be obliged to retain the chain of all communications that have been forwarded, tracking all of its nodes, regardless of whether the content was distributed maliciously at the source or along the chain. This is a massive data retention obligation, which affects millions of users instead of only those investigated for an illegal act. Although Brazil already imposes obligations to retain specific communications metadata, the proposed rule goes further. Knowing the chain of forwarded communications means linking a specific piece of content to that metadata trail, and the chain will be used precisely to trace back the origin of content that is already known.
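To make the scale of this obligation concrete, here is a purely illustrative sketch, not drawn from the bill’s text, of the kind of record a provider would have to retain for every forwarded message in order to reconstruct such a chain:

```python
# Illustrative only: a hypothetical record a provider might need to keep,
# per forwarded message, to reconstruct a forwarding chain a year later.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ForwardingRecord:
    message_id: str                    # identifier of this copy of the message
    parent_message_id: Optional[str]   # the copy it was forwarded from (None if original)
    sender: str                        # account that forwarded it
    recipients: list[str]              # accounts or groups that received it
    forwarded_at: datetime             # when the forward happened
    # Keeping records like this for every forwarded message, for a full year,
    # ties content to a metadata trail for millions of users who are not
    # under any investigation.
```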
This obligation erodes key protections afforded by end-to-end encrypted applications, which exist precisely to ensure confidential communications. And there are critical reasons to protect them: communications between journalists and their sources, law enforcement authorities discussing sensitive aspects of an investigation into a powerful politician, communities organizing ways to resist harassment, and many more. Piecing together a communication chain may reveal highly sensitive aspects of individuals, groups, and their interactions, even when none of them is actually involved in illegitimate activities. The avenues for abuse are wide open.
The proposal requires a warrant before providers hand over the chain of metadata, but this access is not limited to criminal cases. Any interested party could request this information from a judge in a civil suit. In addition, the parameters a judge must consider before authorizing the measure set a much lower bar than the factual basis established in Brazil’s Telephone Interception Law. The stakes are higher for vulnerable communities, activists, social movements, and journalists, particularly in local contexts of dispute and harassment. Recall that even with the stronger protections of the Interception Law, the Inter-American Court of Human Rights has condemned Brazil for illegal interception of social movement representatives in a case where the safeguards were not properly followed. On the other hand, accurately reconstructing the forwarding chain has its own challenges. Techniques such as hash-matching used to comply with this obligation are not immune to circumvention, since changing any bit of a message produces a new hash and breaks the chain’s tracking.
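The circumvention point is easy to see with a minimal sketch. Assuming, for illustration only, that a provider matched forwarded copies by a SHA-256 digest of the content (the bill does not specify any mechanism), a one-character change defeats the match:

```python
import hashlib

def content_hash(message: str) -> str:
    """Hypothetical digest used to match identical forwarded content."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

original = "Polls open at 8am tomorrow."
altered  = "Polls open at 8am tomorrow!"   # a single character changed

print(content_hash(original))
print(content_hash(altered))
# The two digests share nothing in common, so a hash-based chain treats the
# altered copy as brand-new content and the forwarding trail is broken.
```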
Furthermore, this obligation disregards the way more decentralized communication architectures work. It assumes that application providers are always able to identify and distinguish forwarded from non-forwarded content, and also able to identify the origin of a forwarded message. In practice, this depends on the service architecture and on the relationship between the application and the service. When the two are independent, it is common that the service cannot differentiate forwarded from non-forwarded content, and that the application does not store the forwarding history anywhere except on the user’s device. This architectural separation is a long-standing tradition in Internet services, and while it is less common today in the most widely used private messaging applications, the obligation would limit the use of XMPP or similar solutions. It could also negatively impact open source messaging applications, which are designed to let users not only understand but also modify their functionality.
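As a simplified illustration (the stanzas below are schematic, not taken from any real deployment), consider how a standards-based service such as XMPP sees a message that a user forwards by copying it into a new chat:

```python
# Two schematic XMPP-style message stanzas as a server might see them.
# The second is the same text pasted into a new message: on the wire it is
# indistinguishable from an original composition, so a server that is
# independent of the client has no "forwarding chain" to record.
original = (
    '<message from="alice@example.org" to="bob@example.org" type="chat">'
    "<body>Meet at the square at 6pm.</body>"
    "</message>"
)
forwarded_by_hand = (
    '<message from="bob@example.org" to="carol@example.org" type="chat">'
    "<body>Meet at the square at 6pm.</body>"
    "</message>"
)

# True: the payload is identical, and only the client could label it a forward.
print(original.split("<body>")[1] == forwarded_by_hand.split("<body>")[1])
```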
It Restricts, and Potentially Criminalizes, Legitimate Speech
Even without restricting specific content, the proposal prohibits “inauthentic accounts” and “unidentified automated accounts,” the latter being accounts whose users do not disclose the use of automation to the provider and the general public. At least for “inauthentic accounts,” this would entail a general legal obligation to monitor users’ identity, with severe implications for privacy and free expression. What’s more, the proposed definitions are broad and endanger legitimate speech. Accounts commonly combine automated and non-automated actions. The proposal seeks to curb malicious “cyborg” activity, but it sweeps in a myriad of other uses. Companies and large organizations use tools to help manage social networks that allow different employees to post without having direct access to the account and its credentials, which does not turn the account into a bot, as the sketch below illustrates.
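The sketch below is hypothetical and does not use any real platform API (post_to_platform is a stand-in name); it simply shows how an ordinary team workflow mixes automated delivery with entirely human-written content:

```python
# Hypothetical organizational posting tool: several employees queue drafts,
# and a scheduler publishes them through one shared account that no one logs
# into directly. post_to_platform is a placeholder, not a real library call.
drafts: list[str] = []

def submit_draft(author: str, text: str) -> None:
    drafts.append(f"{text}  [drafted by {author}]")

def publish_pending(post_to_platform) -> None:
    while drafts:
        post_to_platform(drafts.pop(0))  # automated delivery of human-written posts

submit_draft("press team", "Our analysis of PLS 2630/2020 is now online.")
publish_pending(print)  # `print` stands in for the real publishing call
# The delivery step is automated, but nothing here is a bot: the bill's broad
# definitions risk sweeping in exactly this kind of routine workflow.
```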
More critically, many civil society campaigns use tools that allow users to post standard messages through their own profiles; these are not bots either. Such tools are crucial to public debate, and accounts that join these campaigns would be a compelling target for abusive reports and censorship. Similar abuses happen time and again under platforms’ terms of service, and they disproportionately affect vulnerable groups.
To make matters worse, these overbroad definitions could serve to criminalize activists, social movements, and organizations. The proposal adds the operation of inauthentic accounts, unidentified automated accounts, or unidentified artificial distribution networks as criminal offenses under the country’s Criminal Organizations and Money Laundering Laws, with high prison penalties (up to 8 or 10 years, respectively, even without aggravating factors). Moreover, these laws authorize prosecutors and police to access users’ subscriber data without a court order in criminal investigations.
It Compels Applications to Identify All Users
The bill compels large social networks and private messaging applications to require identification and location information from all users. The requirement is retroactive, meaning that everyone who already has an account would have to present a valid identification document. Proposals like this always come with a series of hazards. Although the bill allows the use of a pseudonym toward the general public and requires a judicial order to obtain a user’s real identity, it contains substantial exceptions. The criminal offenses created by the bill, included in the Criminal Organizations and Money Laundering Laws, could serve to authorize the Chief of the Civil Police and prosecutors to access users’ subscriber data, including their names and addresses, without a warrant.
Furthermore, compelling applications to hold more information about users, such as their ID documentation, increases the risks connected to data breaches and related crimes such as identity theft and fraud, a risk that doesn’t spare large providers like Facebook or Twitter. In fact, attempts to “establish authentic identity” in order to curb disinformation online tend to lead to increasing and disproportionate data collection, given the task’s inherent hurdles. Even in narrower problem spaces, such as payment systems, identity verification remains plagued by fraud and non-negligible privacy risks. We should also recall that we are primarily dealing with bad actors ready to look for ways to circumvent the law: fake non-digital identities can be used, documents can be forged, and malicious actors can operate at scale to weaponize identity systems, including as a way to discredit other parties.
It Strikes Free Expression Once Again With Blocking Penalties
All of the above problems are compounded by the fact that the bill’s penalties include the temporary suspension of the applications’ activities. Such suspensions are unjustifiable and disproportionate. They would curtail the communications of millions of Brazilians who rely on these providers to chat, work, access information, and express their ideas widely. In addition, application providers under threat of being blocked will apply the rules harshly to protect their own operations, restricting accounts at the slightest doubt. Although the bill establishes notice and appeal processes, platforms still largely fail to offer them in a meaningful way, and legitimate accounts will be systematically silenced.
Source: EFF.org
Veridiana coordinates EFF’s activities with local organizations and activists in Latin America and the Caribbean, where we work together to reinforce the defense of digital and human rights. Veridiana has been involved with telecommunications, media, Internet, and human rights issues since 2009. She served as one of the civil society representatives on the Brazilian Internet Steering Committee (CGI.br) from 2010 to 2013 and worked in Brazilian civil society organizations such as Idec and Intervozes. Veridiana is a lawyer and holds a master’s degree in Economic Law from the University of São Paulo Law School, where she is currently a PhD candidate in Human Rights.