To protect our rights, the EU AI Act must include rule of law safeguards
By Eva Simon, Advocacy Lead for Tech & Rights, and Jonathan Day, Communications Manager, Civil Liberties Union For Europe
The opinions expressed in this article are those of the authors and do not represent in any way the editorial position of Euronews.
AI is a part of everyday life in countless ways, and how we choose to regulate it will shape our societies. EU lawmakers must use this opportunity to craft a law that harnesses the opportunities of AI without undermining the protection of our rights or the rule of law, Eva Simon and Jonathan Day write.
The EU’s Artificial Intelligence Act — the world’s first comprehensive legal framework for AI — is in the final stages of negotiations before becoming law.
Now, as the last details are being agreed, European lawmakers must seize the opportunity to safeguard human rights and firmly regulate the use of artificial intelligence.
Crucially, however, the debate around the AI Act has given insufficient attention to a key requirement: the act must establish a clearly defined link between artificial intelligence and the rule of law.
While the inclusion of human rights safeguards in the act has been discussed extensively, establishing a link to the rule of law is equally important.
Democracy, human rights, and the rule of law are distinct but interdependent concepts that cannot be separated from one another without inflicting damage on society.
An opportunity to strengthen the rule of law in Europe
The principle of the rule of law is fundamental to the EU. It is a precondition for the realisation of other fundamental values and for the enjoyment of human rights.
Notoriously hard to define, the rule of law nevertheless encompasses a set of values that are indispensable to a democratic society: a transparent and pluralistic lawmaking process; the separation of powers and checks and balances; independent, impartial courts and the ability to access them; and non-discrimination and equality before the law.
Given AI's increasing integration into both the private and public sectors, the misuse of AI systems poses a significant threat to the rule of law and democracy. We need robust safeguards to protect the very foundations our Union stands on.
In member states where democracy and the rule of law are already teetering, regulatory loopholes could be exploited to further weaken democratic institutions and processes.
The AI Act is an opportunity to create a robust, secure regulatory environment founded upon standards and safeguards rooted in fundamental rights and the rule of law.
Proper oversight for AI used in justice systems
Central to these safeguards is the inclusion of mandatory fundamental rights impact assessments.
They are included in the European Parliament’s version of the AI Act, and it is imperative that they make it into the final text of the act.
These fundamental rights impact assessments are vital to ensure that AI technologies and their deployment uphold the principles of justice, accountability, and fairness.
Going further, rule of law standards should be added to these impact assessments, giving them a structured framework to evaluate the potential risks, biases, and unintended consequences of AI deployment.
Beyond merely identifying potential risks, the assessments should encompass mitigation strategies, periodic reviews, and updates.
This would also allow rule of law violations stemming from the use of AI to be addressed with all the means available to the EU. Such violations can occur, for example, in criminal justice systems, many of which use AI for automated decision-making to ease the burden and time pressure on judges.
To ensure judicial independence, the right to a fair trial, and transparency, the AI used in justice systems must be subject to proper oversight and operate in line with the rule of law.
Risks of profiling and unlawful surveillance
Importantly, lawmakers should lay the foundation for proper rule of law protection in the AI Act by leaving out a blanket exemption for national security.
AI systems developed or used for national security purposes must fall within the scope of the act; otherwise, a member state could readily deploy them, for instance for public surveillance or analysing human behaviour, simply by invoking the national security carve-out.
The Pegasus spyware scandal, in which journalists, human rights activists and politicians were surveilled by their own governments, demonstrates the clear need to ensure that systems developed or used for national security purposes are not exempted from the scope of the AI Act.
Furthermore, national security can mean different things across the EU depending on the laws of the member states.
Profiling citizens based on national governments' interests would create inequality across the EU and pose a threat to the rule of law and fundamental rights alike.
No blanket exceptions
With Polish and European Parliament elections upcoming, there is no question that AI can and will be used to target individuals with personalised messages, including to spread disinformation, with the potential to distort otherwise fair elections.
On the other hand, AI tools will also be deployed for fact-checking, blocking bots and content, and identifying troll farms. These techniques must be transparent to prevent misuse or abuse of power.
The need to explicitly anchor the rule of law within the AI Act is clear, as is the importance of mandating impact assessments that cover both fundamental rights and the rule of law, without a blanket exemption for national security uses.
Artificial intelligence is a part of everyday life in countless ways, and how we choose to regulate it will shape our societies.
EU lawmakers must use this opportunity to craft a law that harnesses the opportunities of AI without undermining the protection of our rights or the rule of law.
Eva Simon serves as Advocacy Lead for Tech & Rights, and Jonathan Day is Communications Manager at the Civil Liberties Union For Europe, a Berlin-based campaign network to strengthen the rule of law in the European Union.
At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.