The EU AI Act: Legal Implications and National Implementation
Introduction
Legal compliance with the EU AI Act enhances the protection of rights while posing new challenges for businesses and governments. This article focuses on the Act's legal implications, its transparency and data protection requirements, and its adaptation into national law.
Legal Issues
AI tools must be evaluated for their reliability, validity, and impartiality. Specifically, we need to be able to understand how an AI model works, to explain in accessible terms what happens when it makes a decision, and, if the system causes harm or damage, to establish why.
According to the European Commission, for AI to be considered trustworthy, three components are required:
- It must be lawful, ensuring compliance with all applicable legislative and regulatory provisions
- It must be ethical, ensuring compliance with ethical principles and values
- It must be robust, both from a technical and social perspective, to ensure that, even with good intentions, AI systems do not cause unintentional harm
Responsibility and Transparency
One of the most significant legal changes concerns the transparency and responsibility of AI providers. The regulation requires high-risk system providers to maintain records, document their procedures, and ensure their systems operate without discrimination.
Compliance requires:
- Conducting risk analysis and impact assessment
- Creating mechanisms to prevent bias (a minimal illustrative check is sketched after this list)
- Providing clear information to users about system operation
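To make the bias-prevention item above more concrete, here is a minimal sketch in Python of one check a provider might run: comparing positive-decision rates across protected groups (a "demographic parity" check). The audit data, group labels, and the idea of an agreed tolerance are illustrative assumptions, not requirements taken from the Act.

```python
from collections import defaultdict

# (protected_group, model_decision) pairs drawn from a hypothetical audit sample.
decisions = [
    ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

# Positive-decision rate per group, and the gap between the best- and
# worst-treated groups (the "demographic parity difference").
rates = {group: positives[group] / totals[group] for group in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)                     # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap:.2f}")  # flag for human review if the gap exceeds the agreed tolerance
```

A check of this kind would be one element of a broader risk and impact assessment, not a sufficient safeguard on its own.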
Legislation must ensure that algorithms are transparent and fair. Some machine learning techniques, although very successful in terms of accuracy, offer little insight into how they reach their decisions. The term "black-box AI" refers to precisely these scenarios, where the reasons behind particular decisions cannot be identified. Explainability, by contrast, is the property of AI systems that can provide some form of explanation for their actions. The use of "black boxes" can lead to questionable or even wrong decisions.

All current AI systems are goal-directed: they receive a specification from humans of the goal to be achieved and use particular techniques to achieve it. They do not set their own goals. However, some AI systems (such as those based on certain machine learning techniques) may have more freedom to decide which path to take toward the given goal.
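To illustrate the black-box/explainability distinction above, the following minimal sketch (assuming scikit-learn is available; the synthetic data and feature names are ours, not drawn from the Act) trains an accurate but opaque ensemble model and then estimates how strongly each input feature drives its predictions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for, e.g., a credit-scoring task.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Accurate but opaque: the forest's internal decision logic is hard to read off.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# One common (partial) remedy: permutation importance estimates how much each
# input feature actually contributes to predictions on held-out data.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance {importance:.3f}")
```

Feature-level summaries of this kind do not make the model fully transparent, but they provide the sort of explanation discussed above: a human-readable account of what drives its decisions.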
Here, digital literacy becomes particularly important: citizens must be able to understand the basic characteristics of how AI applications operate while maintaining a degree of control over automated decision-making. This control includes the right to object to automated decisions, which must then be reviewed by a human. The requirement for transparency and accountability is important to ensure that decisions are objective and unbiased. However, we believe that maintaining human responsibility and accountability for decisions made by AI systems, even when those responsible lack a complete technical understanding of how these systems work, is crucial for protecting fundamental rights and ensuring the social acceptance of AI.
Data Protection and Privacy
The intersection of AI regulation with data protection law creates new obligations for organizations. Companies must:
- Ensure compliance with GDPR principles
- Implement privacy by design (see the illustrative sketch after this list)
- Conduct data protection impact assessments
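As an illustration of the privacy-by-design item above, the sketch below shows one familiar technique: pseudonymizing a direct identifier with a keyed hash before the record reaches an AI pipeline. The key handling, field names, and storage format are assumptions made for the example, not requirements taken from the GDPR or the Act.

```python
import hashlib
import hmac
import os

# Secret key held separately from the dataset; in practice it would live in a key vault.
PSEUDONYMIZATION_KEY = os.environ.get("PSEUDO_KEY", "example-key-only").encode()

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for an identifier; the raw value is never stored."""
    return hmac.new(PSEUDONYMIZATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"national_id": "AB123456", "loan_amount": 12000}
stored = {
    "subject_pseudonym": pseudonymize(record["national_id"]),
    "loan_amount": record["loan_amount"],
}
print(stored)
```

Pseudonymization reduces, but does not remove, data protection obligations; it is one design choice among those a data protection impact assessment would document.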
Cross-border Challenges
The implementation of the AI Act creates challenges for cooperation with third countries. Companies outside the EU wishing to operate in the European market must comply with the regulation’s provisions, increasing legal costs and complicating cross-border transactions.
The Greek Framework
These directions become particularly important when the national legislator must give concrete form to the risk levels provided for in the new Regulation, on the basis of which the competent authorities will then evaluate specific applications. In our national law, Law 4961/2022 ("Emerging Information and Communication Technologies, Enhancement of Digital Governance and other provisions") has already been enacted; certain of its provisions establish a first outline of substantive rules, especially regarding the transparency obligations of entities using AI systems and the obligations of AI system contractors.
However, the national legislator still needs to incorporate the provisions of the EU AI Regulation and address the areas where discretion for domestic regulation remains, in line with the proposed governance architecture. This is particularly important given that EU AI law may have certain structural limitations regarding the protection of fundamental rights. The new harmonized national legislation should deal with the implementation and specification of the tools provided for in European legislation, such as, to name a few, algorithmic impact assessment, fundamental rights impact assessment, codes of conduct, transparency registers, controlled testing environments, and safe harbors. This can be achieved by developing comprehensive regulations that set clear guidelines for the ethical use of AI in information dissemination. The relevant provisions of the EU AI Regulation must be incorporated and adapted to Greek law and to the Greek reality of journalism and the information sphere in general, with the aim of improving the transparency, accountability, and validity of AI-generated content.
Greece has the opportunity to differentiate itself by positioning itself at the "more business-friendly" end of the domestic regulatory policy spectrum, and by implementing the EU AI Regulation's provisions quickly and in a way that provides greater clarity, simplicity, and flexibility to companies wishing to operate nationally. Local regulations must be consistent with other relevant national and European legislation, such as the Digital Services Act and its implementing law, the Data Governance Regulation, the EU Data Act, the proposed AI Liability Directive and the proposed revision of the Product Liability Directive, as well as the General Data Protection Regulation. Finally, Greece must consider the timely ratification of the Council of Europe's first legally binding international treaty on AI, human rights, democracy, and the rule of law, ensuring citizen protection, transparency, accountability, and legal certainty, while supporting the economic prospects of startups and SMEs, accelerating their market entry, and increasing their national and European competitiveness.
Conclusions
The EU AI Act marks a milestone for artificial intelligence legislation and sets high standards that may influence lawmaking internationally. Although it creates compliance challenges, it paves the way for safer and more ethical use of AI. Its success, however, will depend on member states' ability to implement its provisions effectively and on cooperation between the public and private sectors.