What is the EU AI Act?
The EU AI Act represents a milestone in European regulation of artificial intelligence. The aim of this legislative initiative by the European Union is to establish clear guidelines for the ethical use of AI technologies while striking a balance between innovation and the protection of fundamental rights. The EU AI Act (Artificial Intelligence Act) defines criteria for high-risk AI applications, establishes transparency requirements, and provides for enhanced oversight to ensure that AI systems align with European values and are safe for society.
It is the world's first comprehensive legal framework for the development, publication, and use of artificial intelligence. The European Commission presented its first proposal on April 21, 2021, and after lengthy negotiations, the European Parliament and the Council of the European Union agreed on legislation to regulate artificial intelligence.
What does the EU AI Act include?
Specifically, the EU AI Act includes the following aspects:
- Harmonized regulations for the placing on the market, putting into service, and use of artificial intelligence systems in the EU.
- Prohibitions of certain practices in the field of artificial intelligence.
- Specific requirements for high-risk AI systems and obligations for operators of such systems. These include, for example, documentation obligations or compliance with quality criteria for training, validation, and test datasets.
- Harmonized transparency rules for certain AI systems, such as clear representation of how such systems work or the provision of instructions for use.
- Rules for market surveillance, market monitoring, and governance. A European Artificial Intelligence Board will be established for this purpose.
- Measures to support innovation, such as the establishment of regulatory sandboxes: testing environments set up by national authorities for the development, training, testing, and validation of innovative AI systems.
Fundamentally, the legislation prohibits the use of artificial intelligence in four cases:
- AI systems that manipulate human behavior in a way that causes, or is likely to cause, physical or psychological harm to the persons concerned or to others.
- AI applications that exploit the vulnerabilities of certain people, for example, due to their age, disability, or special social or economic situation.
- AI systems that evaluate or classify people over a certain period based on their social behavior or known or predicted personal characteristics (social scoring).
- Remote biometric identification systems used in publicly accessible places by or on behalf of law enforcement authorities for the purpose of law enforcement, unless such use is strictly necessary for national security.
In addition to these prohibitions, the regulation assigns AI systems to the following risk categories (summarized in the sketch after this list):
- Unacceptable risk: AI applications falling into this category are prohibited. These include, as already mentioned, AI systems for manipulating human behavior, systems for real-time remote biometric identification, including facial recognition, in publicly accessible spaces, and systems for social scoring.
- High-risk: AI applications that pose significant risks to the health, safety, or fundamental rights of individuals are subject to obligations regarding quality, transparency, human control, and security. This applies in particular to AI systems used in the fields of health, education, recruitment, management of critical infrastructure, law enforcement, or justice.
- General-purpose AI: This category includes in particular foundation models such as those underlying ChatGPT. They are subject to transparency requirements. High-impact general-purpose AI systems that may pose systemic risks must additionally undergo an in-depth evaluation process.
- Limited risk: Such systems are subject to transparency obligations: users must be informed that they are interacting with an AI system so that they can make an informed choice about whether to continue using it. This category includes, for example, AI applications that can generate or manipulate images, sounds, or videos. In this category, open-source models are largely exempt from regulation, with few exceptions.
- Minimal risk: These include, for example, AI systems used in video games or spam filters. They are not regulated, and because the regulation is maximally harmonizing, member states may not impose stricter national rules: the wording of national law may not go beyond that of EU law. Existing national regulations on the design or use of such systems are largely repealed. However, a voluntary code of conduct is proposed.
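To make the tiered structure easier to scan, here is a minimal Python sketch of the five risk categories. The enum values, the example systems, and their assignments are illustrative assumptions for this article, not text taken from the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The five risk categories of the EU AI Act, as described above."""
    UNACCEPTABLE = "prohibited"
    HIGH = "obligations on quality, transparency, human control, and security"
    GENERAL_PURPOSE = "transparency requirements; in-depth evaluation if systemic risk"
    LIMITED = "transparency obligations (users must know they face an AI system)"
    MINIMAL = "not regulated; voluntary code of conduct proposed"

# Illustrative (hypothetical) assignments of example systems to tiers.
EXAMPLE_SYSTEMS = {
    "social scoring platform": RiskTier.UNACCEPTABLE,
    "AI-based recruitment screening": RiskTier.HIGH,
    "foundation model behind a chatbot": RiskTier.GENERAL_PURPOSE,
    "image generation app": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

In practice, the tier of a concrete system depends on its intended use, so a real assessment requires a case-by-case legal analysis rather than a lookup table like this one.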

Why is the EU AI Act important?
The regulation of artificial intelligence is important for several reasons, as it helps to minimize potential risks and promote responsible use of this advanced technology.
1. Protection of EU citizens from risks
A crucial point that underscores the importance of the EU AI Act is the protection of citizens from potential risks associated with the use of AI. The regulation is designed to ensure that AI systems function safely and transparently without violating fundamental human rights. By incorporating ethical principles into the development and use of AI, it aims to prevent automated systems from making discriminatory or unethical decisions.
2. Uniform rules within the EU
Furthermore, the EU AI Act establishes uniform standards within the European Union, which is crucial for competitiveness and the smooth cross-border deployment of AI technologies. The regulation thus creates a clear framework for companies, research institutions, and governments to develop and deploy AI responsibly.
3. Introduction of data protection standards
The processing of large amounts of data by AI applications is a key component of their functionality. This data can include personal and sensitive data, which underscores the need for effective regulation. The EU AI Act plays a crucial role in this context by ensuring that the processing of personal data by AI systems meets the highest data protection standards.
4. Minimizing risks of cyber attacks
The rapid development of artificial intelligence brings not only great opportunities but also significant security risks. These risks include, in particular, the threat of cyber attacks and the misuse of AI technologies. The EU AI Act plays a crucial role in this context as it aims to promote the development and implementation of secure AI systems to minimize potential risks.
5. Building public trust
Clear and comprehensive regulation, as envisioned in the EU AI Act, plays a crucial role in building trust in artificial intelligence. This trust-building extends to various stakeholders, including the public, users, and businesses. Responsible use of AI, supported by clear guidelines and ethical standards, contributes significantly to the acceptance of this advanced technology.

When will the EU AI Act come into force?
In 2018, the EU set up a high-level expert group on artificial intelligence; building on its work, the European Commission published a comprehensive white paper on AI and announced its intention to regulate the technology. The first draft of the regulation followed in April 2021. By the end of 2022, the European Parliament had met and considered amendment proposals from member states and various committees.
In June 2023, the European Parliament adopted its negotiating position on the draft law, opening the trilogue negotiations between the EU legislative bodies. For this purpose, the EU Commission, the EU Parliament (EP), and the Council of Ministers came together, with the Commission mediating between the EP and the Council of Ministers, and reached a political agreement on the final law in December 2023. The regulation entered into force in 2024, and most of its provisions apply after a transition period ending in 2026. Companies affected by the scope of the law thus have, at the time of writing this report, about two years to prepare comprehensively for the AI law.
What are the advantages of the EU AI Act?
The regulation of artificial intelligence brings various advantages aimed at considering ethical, legal, and social aspects of AI use.
The EU AI Act creates more transparency for AI systems. This ensures that decision-making processes of the systems are traceable and the use is clearly documented. In addition, clear rules for responsibility are established to clarify liability in case of malfunctions or damages. Companies thus receive a clear framework for the safe use of artificial intelligence.
This framework covers not only transparency rules and legally uniform rules but also the promotion of innovation, which enables companies to invest responsibly in AI research and development.
Additionally, the EU AI Act not only plays an important role in regulating AI within the European Union but could also serve as a pioneering model for other countries facing the challenge of regulating AI. The United States, for example, is working on comparable regulation, though at an earlier stage.
What are the disadvantages of the EU AI Act?
The EU AI Act applies only within the European Union. Developers of AI-based systems can publish and operate prohibited software outside its scope, so protection against risky AI is not guaranteed beyond the EU.
Furthermore, biometric remote monitoring and facial recognition software can still be used for national security purposes. This exception acts as a backdoor through which governments can misuse such applications.
The definition of which systems count as artificial intelligence is also considered inadequate: less complex software can still cause harm yet fall outside the scope of the EU AI Act.
In addition to these general disadvantages, the EU AI Act brings further disadvantages for companies and organizations. To implement the new regulations, companies must invest money. Especially for small and medium-sized enterprises, this can lead to a higher cost burden.
Furthermore, the regulation of artificial intelligence slows down the development of new AI applications, as companies must engage intensively with the rules and ensure compliance.
Companies and organizations must also be aware that the regulation exposes them to a higher liability risk.
Other Important Regulations for Digital Services
In addition to the EU AI Act, there are other important regulations developed by the EU to address the new conditions of digital transformation. These include the Digital Markets Act (DMA), the Digital Services Act (DSA), and the Data Act.
Digital Markets Act (DMA)
The Digital Markets Act, together with the Digital Services Act, is a cornerstone of the European Union's current digital strategy. The regulation came into force on November 1, 2022, and has applied since May 2, 2023.
The aim of the DMA is to establish clear rules for large, systemic, and market-dominant online platforms, so-called "gatekeepers." This is intended to increase user choice, strengthen competition, and make the market fairer overall.
The following criteria must all be met for a company to be considered a gatekeeper (see the sketch after this list):
- The company has a strong economic position with significant influence on the EU internal market and is active in at least three countries of the European Union.
- The online platform has a strong intermediary role, i.e., it connects many users with a variety of companies.
- The company's position in the market is established and permanent.
Companies such as Alphabet (Google), Amazon, Apple, ByteDance (TikTok), Meta, and Microsoft, for example, meet all these criteria and are therefore considered gatekeepers.
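As a rough illustration, the cumulative criteria can be expressed as a simple predicate. This is a minimal Python sketch under the assumptions above; the field names and example values are hypothetical, and the real DMA designation process involves additional quantitative thresholds not shown here.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    """Hypothetical model of the DMA gatekeeper criteria listed above."""
    strong_economic_position: bool  # significant influence on the EU internal market
    active_member_states: int       # number of EU countries the company is active in
    strong_intermediary_role: bool  # connects many users with many businesses
    entrenched_position: bool       # established and durable market position

def is_gatekeeper(p: Platform) -> bool:
    """All criteria must be met cumulatively."""
    return (
        p.strong_economic_position
        and p.active_member_states >= 3
        and p.strong_intermediary_role
        and p.entrenched_position
    )

# Illustrative example only, not an official designation check.
print(is_gatekeeper(Platform(True, 27, True, True)))  # True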
Once designated, gatekeepers are subject to the following regulations, among others:
- Gatekeepers must allow third parties to work with their platform in certain situations.
- They must ensure that their users can access the data they generate on the platform.
- They must not prevent consumers from uninstalling pre-installed apps.
- The platforms must not favor their own products and services over those of third parties in terms of ranking.
- Gatekeepers may not track users outside the central platform service without consent for the purpose of targeted advertising.
A complete overview of all rules of conduct can be found on the website of the European Commission.
Violations of the Digital Markets Act can result in fines, penalty payments, or other remedial measures.
Digital Services Act (DSA)
The Digital Services Act (DSA) came into force on November 16, 2022, and has applied in full in all EU member states since February 17, 2024.
The DSA sets out rules for providers of digital services intended to ensure that users' fundamental rights on the internet, such as freedom of expression, are better protected and that there is more legal certainty in the online environment. The DSA is intended to make it easier to remove illegal content, such as hate speech or counterfeit products offered for sale.
The regulations of the Digital Services Act apply to all online providers that make their digital services available in the EU internal market, regardless of whether they are established in the EU or not. These include internet providers, domain name registrars, hosting services such as cloud and web hosting services, online marketplaces, app stores, collaborative economy platforms, and social media platforms. Special due diligence obligations apply to very large providers with more than 45 million monthly active users in the EU.
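The size-based tiering can be illustrated with a trivial threshold check. This is a hypothetical Python sketch; the function name and the returned strings are assumptions made for illustration, not DSA terminology.

```python
# Threshold above which a provider counts as "very large" under the DSA.
VLOP_THRESHOLD = 45_000_000  # monthly active users in the EU

def dsa_obligation_tier(monthly_active_users: int) -> str:
    """Hypothetical helper mapping user counts to DSA obligation tiers."""
    if monthly_active_users > VLOP_THRESHOLD:
        return "very large provider: special due diligence obligations"
    return "standard DSA obligations"

print(dsa_obligation_tier(50_000_000))  # very large provider: special due diligence obligations
print(dsa_obligation_tier(2_000_000))   # standard DSA obligations
```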
Conclusion
Overall, the EU AI Act represents an important step in regulating artificial intelligence in the European Union. The regulation is a milestone that aims to responsibly exploit the potential of AI while focusing on ethical principles and the protection of society. Clear standards that promote transparency, security, and the ethical use of AI are among its advantages. The harmonization of rules within the EU strengthens competitiveness and facilitates the cross-border use of AI technologies.
However, challenges and possible disadvantages must also be considered. Despite regulation, governments could invoke national security to misuse AI for surveillance purposes. Overall, the EU AI Act stands in the tension between the need for security, ethics, and transparency on the one hand, and the promotion of innovation and competitiveness on the other.