Good Ends, Bad Means? The EU’s Struggle To Protect Copyright and Freedom of Speech
from Net Politics and Digital and Cyberspace Policy Program


In its controversial copyright overhaul, the EU struggles to balance intellectual property protection with the free use of the internet. 
People protest against the planned EU copyright reform in Berlin, Germany, March 23, 2019. Reuters/Hannibal Hanschke

Amélie P. Heldt is a junior legal researcher and PhD candidate at the Leibniz Institute for Media Research/Hans-Bredow-Institute, Hamburg, an associated researcher at the Humboldt Institute for Internet and Society, and currently a visiting fellow with the Information Society Project at Yale Law School. Follow her at @amelie_hldt.

On March 26, 2019, the European Parliament voted for a new regulatory framework seeking to harmonize copyright law across European Union (EU) member states. Although many believed that previous EU copyright legislation could no longer effectively regulate the digital economy, and although the new directive is intended to strengthen the rights of creatives and news publishers, the reform has become one of the biggest controversies in EU regulation in recent memory, symbolizing an intergenerational culture clash between policymakers and digital natives. Most agree that the underlying goal is legitimate, but there has been growing resistance to the probable means of achieving it: upload filters. The prospect of these filters has sparked a heated debate about “the free internet,” a discussion that has often lacked objectivity on both sides.


The primary goals of the copyright directive are to “stimulate innovation, creativity, investment, and production of new content” and to “respect and promote cultural diversity while at the same time bringing European common cultural heritage to the fore.” In essence, rights holders are to receive fair compensation for the digital uses of their works. To that end, online content-sharing service providers (OCSSPs) need to pay for the content they host, or they will be liable for copyright infringements committed by their users. The directive defines OCSSPs as services giving “the public access to copyright-protected works or other protected subject matter uploaded by its users.” This includes intermediaries such as Facebook, YouTube, or Twitter, which do not generate original content but provide a platform for users to share their own or third-party content.

The copyright directive proposal has been under attack for many reasons, but Article 17 (previously Article 13) has proved the most controversial because it is expected to have the most fundamental consequences for the internet’s infrastructure. Before the new copyright directive, OCSSPs benefited from the hosting liability exemption under Article 14 of the EU E-Commerce Directive. The first draft of the new directive stipulated that OCSSPs ought to ensure adequate compensation by entering into licensing agreements or by engaging in “proactive measures” against unlicensed publication on their platforms. This wording was heavily criticized for making a pre-publication check mandatory for all uploaded content. The subsequent negotiations produced the amended version now adopted, which does not impose an explicit obligation on OCSSPs to implement upload filters. Rather, the providers’ liability for copyright infringements could in practice push them toward filtering systems, which is why many experts recommended voting against it.

Upload filters have been attacked for several reasons. First, they scan all user-generated content uploaded to a platform, without any specific suspicion. This type of vast information scanning has been condemned by the Court of Justice of the European Union twice, in its Scarlet v. Sabam and Sabam v. Netlog decisions. Second, they are not (yet) fit for purpose: the technology cannot reliably recognize unwanted text or distinguish copyright infringements from legitimate uses of protected content such as fair use, parody, or satire. Filters recognize and match hashes, but they cannot offer cognitive performance comparable to that of a human reviewer. Although filters can be trained to recognize previously identified content through the hashes attributed to it (re-upload filters), an approach that has been successful in removing disturbing content such as child pornography (e.g., Microsoft’s PhotoDNA), upload filters are considered highly error-prone. Their deployment is likely to result in the overblocking of creative content whenever there is doubt about the copyright. Third, upload filters act before publication, which makes them a very convenient tool for censorship in its formal legal sense: the obligation to submit a medium to a state agency for approval prior to publication. In sum, this technology will almost certainly violate users’ freedom of expression and information if deployed as described. It will also limit the platforms’ freedom, but that restriction, in contrast, is justified by the purpose of copyright protection and proportionate to it.
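To make the hash-matching limitation concrete, here is a minimal sketch of how a re-upload filter of the kind described above might work. It assumes exact SHA-256 matching rather than the perceptual fingerprinting that production systems such as PhotoDNA or YouTube’s Content ID actually use, and all names in it (KNOWN_HASHES, is_blocked_upload) are hypothetical.

```python
# Toy illustration of a hash-based re-upload filter (hypothetical, simplified).
# Real systems (e.g., PhotoDNA, Content ID) rely on perceptual or fingerprint
# hashing, not the exact SHA-256 matching shown here.

import hashlib

# Hashes of works already flagged by rights holders (assumed to exist).
# The example entry is the SHA-256 digest of the bytes b"test".
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def content_hash(data: bytes) -> str:
    """Return a hex digest identifying the uploaded bytes."""
    return hashlib.sha256(data).hexdigest()


def is_blocked_upload(data: bytes) -> bool:
    """Block the upload if its hash matches previously flagged content.

    The filter has no notion of context: a quotation, parody, or other
    legitimate use of the same bytes is blocked just like an infringing copy.
    """
    return content_hash(data) in KNOWN_HASHES


if __name__ == "__main__":
    upload = b"test"  # stand-in for an uploaded file
    print("blocked" if is_blocked_upload(upload) else "published")
```

Even this toy version exposes the structural problem: the filter matches bytes, not context, so a lawful parody containing the flagged material is blocked exactly like an infringing copy, while trivially re-encoding a file changes its hash and evades the exact match entirely. That is why real deployments rely on fuzzier perceptual hashing, which in turn produces more false positives.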

Despite all the legitimate criticism of upload filters, we should not lose sight of the copyright holders who have seen their work used online without any consideration. While users are concerned about their continued freedom to use social media platforms, creatives might welcome the new regulation, which could force OCSSPs to license protected work instead of benefiting from its dissemination for free. To comply with Article 17, OCSSPs also have the option of concluding licensing agreements, but doing so would be more costly and therefore less attractive, which increases the likelihood that filtering technology will be used.

Even if companies strictly adhere to copyright protection, the negative effects of upload filters on society as a whole should not be underestimated: they will reinforce the gatekeeper position of the biggest platforms and have a major impact on public discourse, not to mention the spillover effect on smaller platforms, which will follow suit so as not to suffer a competitive disadvantage. The controversy sparked protests in many major European cities and online, where Wikipedia shut down its German-language site in support of the opponents of Article 17. So-called YouTubers, popular among millennials, were particularly active in calling on their followers to join the protests. The extremely bad publicity for the EU as an institution, just two months before the elections for a new European Parliament, is another knock-on effect. It is unfortunate that the new directive did not find a way to reform copyright law that is compatible with the digital sphere, or at least to combine the protection of intellectual property with a non-restrictive use of the internet.


All hopes for an implementation of the directive that lives up to the expectations of both sides now lie with the member states, which gave their final approval on Monday and will have twenty-four months to transpose the directive into domestic law. When doing so, member states have to respect the minimum standard set by the directive but are free to flesh out its undefined clauses. With regard to Article 17, they could, for example, limit the deployment of upload filters under specific conditions or prioritize other instruments such as licensing.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.