14 May 2024
Pioneering the future: the European Union’s regulatory framework for Artificial Intelligence
As early as October 2020 (1), the European Council set forth an ambitious objective for the EU to become a “leading global player in the development of safe, reliable, and ethical artificial intelligence”.
On April 21, 2021, the European Commission unveiled its draft regulation, the AI Act, laying down a set of harmonized rules on AI, on which the Council adopted its general approach at the end of 2022.
Following the legislative journey, a provisional agreement was reached between the European Parliament and the Council on December 9, 2023, at the close of three days of intense negotiations.
This legislation was ultimately adopted on Wednesday, March 13, 2024, by a sweeping majority of the Members of the European Parliament.
But what are the key contributions of this text?
Balancing technology neutrality with asymmetric regulation
The AI Act puts forward a broad definition intended to encompass all AI systems, so that it does not become outdated as technology progresses. It defines AI as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (2).
While the text aims to be applicable to a broad spectrum of AI systems, it introduces an asymmetric field of application, moderating its claimed technological neutrality.
The legislation adopts a risk-based approach: risks are classified into three levels, from unacceptable to minimal, each associated with a specific legal regime.
Low-risk AI systems are only subject to light regulation (3), while “high-risk” AI applications and systems categorized as presenting “unacceptable risks” are more heavily impacted.
High-risk AI systems under regulation
High-risk AI systems are those posing significant risks to the health, safety, or fundamental rights of individuals. The AI Act dedicates its Title III to these systems.
First, high-risk AI systems are defined as those intended to be used as a safety component of a product, or constituting such a product themselves, where that product is subject to an ex-ante conformity assessment by a third party.
Additionally, Annex III sets out a limited list of further high-risk AI systems, which the European Commission can update to stay abreast of technological advancements (4).
These AI systems are, in principle, permitted on the European market, provided they comply with the requirements set out in Chapter 2 of Title III. These requirements cover various aspects such as risk management systems, data governance, documentation and record-keeping, transparency, user information, human oversight, robustness, and (cyber)security of the systems.
These obligations are primarily directed at AI system providers. However, users, manufacturers, or other parties are also subject to obligations outlined in Title III.
AI systems presenting unacceptable risks are (in principle) prohibited.
Title II lists prohibited practices concerning AI systems whose use is deemed unacceptable because they contravene the Union’s values, especially due to potential violations of fundamental rights.
For example, the following are prohibited (5):
• “the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; (…)”
• “the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following (…)”
An exception exists for real-time remote biometric identification systems in publicly accessible spaces, listed among prohibited practices, which can be authorized for law enforcement purposes, subject to necessity and proportionality conditions.
AI systems with specific manipulation risks are governed by a special regulatory framework
This specific regime applies to certain AI systems, particularly those intended to interact with natural persons or to generate content. This regime applies beyond the risk classification, meaning even low-risk AI systems may be subject to this transparency regime.
These systems will be subject to specific transparency obligations (6).
For instance, these additional obligations include informing individuals that they are interacting with an AI system or are exposed to an emotion recognition system. The regulation also addresses “deep fakes” (7), for which it is mandatory, subject to certain exceptions, to disclose that the content has been generated by automated means.
Conclusion
The AI Act is a comprehensive text attempting to regulate artificial intelligence systems, aiming to mitigate the associated risks and foster a trustworthy technology ecosystem.
It’s crucial to highlight that the intent is not to impose heavy regulations on businesses or individuals initiating AI systems. Rather, the goal is to limit the requirements to “the minimum necessary” to address AI risks without unduly restricting technological development or disproportionately increasing the cost of bringing AI solutions to market8.
This rationale is why the text reduces regulatory burdens on SMEs and startups and establishes regulatory sandboxes providing a controlled environment to facilitate the development, testing, and validation of AI systems9.
Now awaiting formal adoption by the Council! The legislation will come into effect 20 days after its publication in the Official Journal and will be fully applicable 24 months later, with some provisions coming into force sooner or later.
Let’s not forget that AI is rapidly evolving and is still in its infancy! Its everyday applications are as plentiful as the potential risks they may pose. In this context, the EU is breaking new ground by seeking to mitigate these risks without hindering the development of artificial intelligence.
The TAoMA Team remains at your disposal for any inquiries on this subject!
Juliette Danjean
Trainee lawyer
Jean-Charles Nicollet
European Trademark & Design Attorney – Partner
(1) Extraordinary meeting of the European Council on the 1st and 2nd of October, 2020.
(2) See article 3(1) of the European Commission’s Proposed Regulation.
(3) See the single article of Title IX, which encourages AI system providers to maintain codes of conduct. This initiative seeks to encourage providers of AI systems that are not deemed high-risk to voluntarily adopt the standards designed for high-risk systems as outlined in Title III.
(4) The text currently lists specific high-risk AI systems, such as those used for biometric identification and the classification of individuals, along with systems managing critical infrastructure operations (including traffic, water, gas, heating, and electricity supply).
(5) See article 5, Title II of the Regulation’s latest iteration.
(6) See Title IV of the Regulation’s latest version.
(7) The European Parliament defines “deep fakes” as the result of AI-driven media manipulation, described as hyper-realistic alterations of reality. (Accessible at: https://multimedia.europarl.europa.eu/fr/audio/deepfake-it_EPBL2102202201_EN)
(8) Recitals of the European Commission’s Proposed Regulation (1; 1.1)
(9) See Title V of the Regulation’s latest version.
5 December 2022
NFT battle: 1-0 for Juventus!
Juventus needs no introduction, even for those who know nothing about soccer!
While Juve, as its fans call it, fights every day on the pitch, it is not left behind when it comes to defending its rights in court, and with some success.
In one of the first decisions in the European Union in this area, Juve won outright against digital playing cards authenticated by NFTs.
Short summary of the competition
In 2021, Blockeras s.r.l. obtained the agreement of various active and retired footballers to launch the Coin Of Champion project, which consists of producing playing cards bearing their likeness and authenticated by NFTs.
One of the cards represented the former center forward Bobo VIERI wearing his old Juve jersey.
In 2022, Blockeras launched the marketing of its cards, triggering the Juventus attack.
Indeed, the latter is the owner of numerous trademarks, including the word marks JUVE and JUVENTUS and a figurative mark representing its famous jersey with black and white stripes bearing two stars.
Having discovered the production (minting), advertising and sale of the cards authenticated by NFTs containing its trademarks without its authorization, Juve brought an action before the Court of First Instance of Rome seeking a preliminary injunction, arguing that these cards constituted acts of trademark infringement and unfair competition.
In its defense, Blockeras argued, among other things, that the trademarks invoked were not registered for downloadable virtual goods!
Scoreboard
The Court of First Instance of Rome notes that the trademarks belong to the most successful Italian soccer team, the one that has won the most competitions.
In addition, Juve has a widespread merchandising activity in different sectors (clothing, games, etc.) both on the web and in physical stores in different Italian cities.
Thus, the use of the image of Bobo VIERI wearing his Juve jersey entails a use of Juventus’s trademarks without its authorization. This use is for purely commercial purposes, and authorization should also have been requested from the famous soccer club, in addition to Bobo VIERI’s consent to the use of his image in his Juve jersey, since the reputation of the club’s trademarks contributes to the value of the digital card authenticated by NFT.
As for Blockeras’ argument that the marks are not protected in class 9 for virtual goods, the court dismisses it out of hand. Indeed, it notes that the marks designate various goods, notably in class 9, related to “downloadable electronic publications”.
In addition, it notes that Juve is active in the world of crypto games, cryptocurrencies and NFTs, notably through agreements with the French company Sorare.
It therefore concludes that the creation and marketing of the digital cards by Blockeras infringes Juve’s trademarks.
Comment (non-sporting)
Last June, the European Intellectual Property Office (EUIPO) published its “guidelines” on NFTs, in which it considers that they fall under class 9 “because they are treated as digital content or images”. This suggests that trademarks covering physical goods must also be registered for virtual goods if their owners wish to be protected for the latter.
This decision of the Court of First Instance of Rome seems to point in the same direction, since it recognizes the similarity between the virtual cards authenticated by NFTs and the “downloadable electronic publications” covered by the Juventus trademarks. It is true that the court also relies on Juve’s marked activity in the field of crypto games and cryptocurrencies to reinforce the likelihood of confusion in the public’s mind.
Nevertheless, if Juve’s trademarks had not designated goods related to downloadable electronic publications, one may wonder whether the court would have followed the same reasoning despite the soccer club’s activity in these new technologies.
Given the current uncertainties related to NFTs, it is therefore strongly recommended to extend the protection of one’s trademarks to virtual products, at least as a precaution.
Please feel free to contact us to discuss and set up a branding strategy adapted to your needs!
Jean-Charles Nicollet
European Trademark and Design Attorney
20 October 2022
Between scam and self-regulation: the paradox of the NFT world
Speculation around some NFTs attracts covetous attention. Five NFTs from the famous Bored Apes collection were thus stolen, for a loss of around 2.5 million dollars. One of the NFTs alone is valued at over 1 million dollars.
A few days ago, several people were indicted in Paris for these acts, characterized as organized-gang fraud, money laundering and criminal conspiracy.
The case started with an offer to “upgrade” the famous NFTs into animated GIFs. A phishing website was part of the scheme.
It ended with a booby-trapped smart contract… which gave access to the NFTs. And the trick was done.
The technique is utterly commonplace: phishing, known in the jargon as a “phishing scam”. But the means deployed are reserved for an elite of hackers perfectly familiar with the blockchain. Who also happen to be very young…
However, this ecosystem is also self-regulating. It is historically based on strong values of mutual aid and disinterested cooperation on the part of members. Some go so far as to make it their mission to ensure transparency and denounce shady transactions.
That is where an anonymous but famous investigator in the blockchain world comes in: ZachXBT.
Zach tracks and analyzes transactions as they appear on the blockchain, traces cryptocurrency flows and makes cross-checks that allow cybercriminals to be identified.
His Twitter account has a large following.
He has already offered his help in several famous scam cases.
ZachXBT states on his Grants crowdfunding page that, after being a victim himself, he decided to document shady deals in order to educate and increase transparency in this space.
He also exposes influencers who abuse their influence to push the public into opaque, even dishonest transactions.
In this Bored Apes case, he provided decisive help to the OCLCTIC (the French Central Office for Combating Crime linked to Information and Communication Technologies).
Anne Messas
Attorney-at-Law
12 August 2022
Minecraft refuses NFTs in the name of inclusion
While lawyers and economic actors are struggling to adapt to the NFT and metaverse market, young people are delighted, because one game publisher, and not the least of them, has said no.
In its press release of July 20, 2022, the publisher of Minecraft, the most downloaded video game in the world, takes a stand against the logic of speculation, rarity and exclusion which, in its view, is conveyed by the current use of NFTs.
Publisher Mojang Studios, owned since 2014 by Microsoft, stands firmly against the integration of NFTs into the game Minecraft, in the name of values of equal access to game content and creative inclusion.
“NFTs are not inclusive of all our community and create a scenario of the haves and the have-nots. The speculative pricing and investment mentality around NFTs takes the focus away from playing the game and encourages profiteering, which we think is inconsistent with the long-term joy and success of our players.”
This decision is in line with the mindset of many gamers who reject NFTs, associating them with a world of speculation, but also because of the serious and currently unchecked environmental consequences of blockchain technology.
The world of Web 3.0 never ceases to provoke debate.
TAoMA is following this very closely.
Stay tuned
Anne Messas
Attorney-at-law
27 March 2019
Article 13: should you worry?
The copyright directive, proposed by the European Commission on September 14, 2016, was approved in a modified and final version on March 26, 2019, by the European Parliament. We would like to review the fears aroused by its “Article 13” – now Article 17 – since the proposal was issued more than two and a half years ago.
Like any directive, this text will have to be transposed into national law by each of the 28 (or 27?) EU Member States and will not be applicable as such (as opposed to regulations, such as the GDPR, which did not have to be transposed). The directive only sets goals to be reached and lets the Member States decide how to reach them (for example, whether or not to require automatic filtering systems).
Throughout the negotiation process, this directive raised many concerns, and we wish to clarify each of the fears and questions that flooded the web, in particular regarding its “Article 13”, which became Article 17 in the final version but will forever remain “Article 13”…
What is this Article 13?
Article 13 consists of:
Ending the “safe harbor” protection enjoyed by hosting providers, which case law had extended to platforms (under which hosting providers’ liability was shielded as long as they promptly withdrew litigious content), and replacing it with an a priori liability;
Paying the authors through license agreements, which are not mandatory for the rightholders (point 1 of the article);
When the rightholders do not want to enter into a global license agreement, compelling the platforms to cooperate and thus to withdraw the litigious content (point 4);
When no authorization has been granted by the rightholders, making the hosting providers liable for the litigious content unless they show that they did everything to obtain the authorization, made their best efforts to make the copyrighted content unavailable, and promptly withdrew or disabled access to the content after receiving a notification from the rightholders. A principle of proportionality will determine whether the hosting providers have complied with their duty to cooperate, in view notably of the number of visitors, the size of the platform, the type of uploaded material, and the efficiency of the means and their cost for the hosting provider;
Allowing users to appeal the withdrawal of their content through an internal appeals system (point 8);
Compelling the platforms to install automatic filtering systems, except for platforms that are less than three years old and have an annual turnover of less than 10 million euros. When these platforms have more than 5 million users, they also have to show that they made their best efforts to prevent the re-uploading of copyrighted content for which the rightholders had made specific requests (point 4aa).
Is it going to make SMEs’ lives more complex, as the GDPR did?
The European Parliament reduced the burden that the Commission had initially placed on SMEs’ shoulders. They will not have to install automatic filtering systems, even when license agreements are negotiated with the rightholders (point 7). However, the smallest platforms will have no choice but to accept signing such license agreements.
Will I be taking risks when showing trademarks in my videos?
No, nothing changes in terms of trademarks. The directive only affects copyright, not trademark law. It does not change the fact that it is already possible, without being held liable, to show, deliberately or not, an item bearing a protected trademark in uploaded content. This is the case, for example, of a famous spirits trademark appearing in a video by Liza Koshy. The trademark is not used “in the course of trade”: using it does not infringe the attached rights.
What about copyrighted works? Is it going to be forbidden to show them, even on a corner of the screen and for less than one second?
Even though the European Commission had not provided for it in its proposal, the European Parliament added to the adopted version of the directive that the copyright exceptions remain applicable and shield the content they cover. Consequently, it will still be possible, as it is now, to:
Reproduce a short extract of a copyrighted work to illustrate a broader statement (quoting a sentence from a novel, showing an excerpt of a theater show, using a few seconds of a song…), as done for example by YouTuber Mark Edward Fischbach in a video showing a few seconds of The Simpsons;
Include a protected work as part of a larger whole, such as the Louvre Pyramid by Ieoh Ming Pei (an architectural work still under protection) in a video about Paris, or a Walt Disney character on the wall of a YouTuber’s bedroom, as long as these works are not the principal subject of the video (exception of “incidental inclusion” or of “panorama”);
Parody a work to make fun of it, use it to make fun of something else, or produce humorous content, as does a video using an excerpt from the movie “Downfall” combined with a speech by Barack Obama before Congress;
And of course, it will still be possible to freely use works belonging to the public domain.
These exceptions depend on national legislation and may differ from one Member State to another.
I heard that article 13 means the return of censorship…
As long as the copyright exceptions are preserved, the directive will never have the effect of “censoring” uploaders. Systems for monitoring copyrighted works might become more efficient and give rise to more automatic withdrawals, but these withdrawals will be justified by copyright infringements. Copyright is, admittedly, a limitation on freedom of speech, since it makes it possible to forbid the uploading of a copyrighted movie without the authorization of its rightholders, for example. But it is not accurate to call this “censorship”, because this limitation on freedom of speech is grounded in law and in fundamental and constitutional rights in all EU Member States, notably Article 10 of the European Convention on Human Rights. The word “censorship” implies the idea of an arbitrary interference, which makes it inaccurate – unless the fears aroused by “Content ID” prove to be justified (see the last question below).
Is it true that the memes will be banned?
Memes are parodic images that may include copyrighted material, such as a 2016 parody of Game of Thrones relating to the transfer of power from Barack Obama to Donald Trump.
These pictures use a copyrighted work (Game of Thrones) and parody it by adding the presidents’ faces. On the face of it, such a meme infringes the moral rights of the authors (no mention of the author’s name, unauthorized alteration of the work) as well as their economic rights (unauthorized reproduction of the work, no remuneration of the authors). However, this meme is not an infringement, because the copyright exceptions protect it from a withdrawal request: the image is a parody and it is used to make fun of Donald Trump, depicted as a terrifying and pitiless ruler.
Memes are not jeopardized by the directive, de jure.
But if the exceptions still apply, is anything going to change?
First, since the copyright exceptions are included in the directive, the hosting providers will not have the overwhelming task of withdrawing all existing content that uses protected works covered by these exceptions!
What will change, above all, is the way the rightholders are paid. They will be paid through license agreements or on a case-by-case basis, by both the hosting provider and the uploader, whereas until now only the latter paid the authors for the works used in the uploaded content, out of the share of advertising revenues that the hosting provider paid to uploaders. This is one of the reasons why the hosting providers were lobbying against Article 13.
Then, it is true that the changes might not be visible to all users. Hosting providers have already set up, in recent years and notably in France, filtering systems to prevent the upload of protected content, for example through the monitoring of keywords but also through robots. They also withdraw content that is notified as infringing, outside of any license agreement. Finally, they have already started creating appeal procedures against withdrawals. For example, YouTube (owned by Google) offers an online copyright takedown notice procedure, but also a procedure to appeal a withdrawal. YouTube has also set up a form to dispute a “Content ID” claim, and even a possibility to appeal the confirmation of withdrawal after the dispute of the Content ID claim (same link). YouTube has anticipated the implementation of Article 13.
Experienced users may already be circumventing the automatic filtering systems when uploading their content: the technological battle is not fought by the bulk of users, but it is, as always, well ahead of legal norms.
How does this automatic filtering robot “Content ID” work?
This robot is able to detect that a YouTube user is trying to upload a movie, or part of a movie, for which the rightholders have requested protection. Thus, if a YouTube user tries to upload the latest Warner or Universal release to his or her channel, and these companies have made a Content ID request, the upload is automatically blocked. The same will happen if the uploader only includes an excerpt from the movie in the video.
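To illustrate the principle only – YouTube’s actual matching technology is proprietary and far more sophisticated – here is a minimal, hypothetical sketch of fingerprint-based matching; the names, the toy fingerprint function and the per-claim tolerated duration are assumptions made for the example.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Claim:
    work_id: str               # protected work registered by the rightholder
    fingerprints: set          # fingerprints of that work
    max_allowed_seconds: int   # default excerpt duration the rightholder tolerates

def fingerprint(chunks):
    """Toy fingerprint: one hash per one-second chunk of audio/video."""
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def check_upload(chunks, claims):
    """Block the upload if it matches a claimed work beyond the tolerated duration."""
    prints = fingerprint(chunks)
    for claim in claims:
        matched_seconds = sum(1 for p in prints if p in claim.fingerprints)
        if matched_seconds > claim.max_allowed_seconds:
            return f"blocked (matches {claim.work_id})"
    return "published"

# A hypothetical rightholder registers a 120-second work and tolerates 5-second excerpts.
movie = [bytes([i]) for i in range(120)]
claims = [Claim("warner_release", set(fingerprint(movie)), max_allowed_seconds=5)]
print(check_upload(movie[:3], claims))    # published
print(check_upload(movie[:30], claims))   # blocked (matches warner_release)
```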
The problem is the coexistence of this automatic filtering system with the copyright exceptions (parody and quotation). Indeed, the rightholders have the possibility to set a default authorized duration for excerpts. This is precisely why dispute procedures against Content ID claims were set up. But since these procedures most often come out in favor of the rightholders, the uploader has no other option than going to court to obtain a decision upholding his or her freedom of speech.
It is of course impossible for this robot to monitor all musical and audiovisual copyrighted works (movies, TV shows, music videos, etc.), so the automatic filtering will only apply to a small selection of copyrighted works. The hosting providers’ major fear is that automatic filtering could be imposed under license agreements as a performance obligation rather than a best-efforts obligation: in the first case, they would be liable whenever the robots failed to detect the unauthorized use of a copyrighted work; in the second, they would not be liable as long as they showed that they made their best efforts to enforce the automatic filtering with their technological tools and human resources.
Gaëlle Loinger-Benamran
Partner
European Trademark and Design Attorney
and
Jérémie Leroy-Ringuet
Attorney at Law