EU AI Act 101 – Everything you should know about it

What is the EU AI Act, and why is it important?

The EU AI Act is a European regulation laying down harmonised rules on AI. In particular, the AI Act sets out rules on the following matters:

  • Harmonised rules for placing on the market, putting into service, and using AI systems in the Union;
  • Prohibitions of certain AI practices;
  • Specific requirements for high-risk AI systems and obligations for operators of such systems;
  • Harmonised transparency rules for certain AI systems and the placing on the market of general-purpose AI models;
  • Market monitoring, market surveillance, governance and enforcement; and
  • Measures to support innovation, with a particular focus on SMEs, including start-ups.

The EU AI Act’s main goal is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy AI, while ensuring the protection of health, safety and fundamental rights. The Act is key to ensuring that AI is ethical and non-discriminatory, striking a balance between fostering the growth of AI in the EU and mitigating its risks.

What are the key compliance requirements under the AI Act?

The EU AI Act sets out risk-based requirements for AI systems and, therefore, applicable compliance requirements depend on the risk level of the AI system.

Specifically, high-risk AI systems must comply with the following requirements:

  • Risk management system: Entities shall establish, implement, document and maintain a risk management system;
  • Data governance: Entities shall develop data governance and management practices to ensure that the data used are of high quality and free from bias;
  • Technical documentation: Entities shall develop technical documentation in respect of the AI system to demonstrate that it complies with the applicable requirements;
  • Record-keeping: The AI system shall technically allow for the automatic recording of events (logs) over the lifetime of the system;
  • Transparency: The AI system shall be designed and developed in such a way that its operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately;
  • Human oversight: AI systems shall be designed and developed so that they can be effectively overseen by natural persons during the period in which they are in use;
  • Accuracy, robustness and cybersecurity: AI systems shall be designed and developed with an appropriate level of accuracy, robustness and cybersecurity.

Additionally, all AI systems intended to interact directly with natural persons shall be designed in such a way that the natural persons concerned are informed that they are interacting with an AI system.
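As a purely illustrative sketch (not a compliance template), the record-keeping and transparency duties above might translate into an AI-backed service that automatically logs events over its lifetime and tells natural persons they are interacting with an AI system. All class and field names here are hypothetical assumptions:

```python
import datetime

# Hypothetical sketch of two of the requirements above:
# (a) automatic recording of events (logs) over the system's lifetime, and
# (b) informing natural persons that they are interacting with an AI system.

AI_DISCLOSURE = "You are interacting with an AI system."

class LoggedAISystem:
    def __init__(self):
        self.event_log = []  # in practice: append-only, tamper-evident storage

    def _record(self, event_type, detail):
        # Automatic recording of events with a timestamp
        self.event_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "event": event_type,
            "detail": detail,
        })

    def respond(self, user_input):
        self._record("input_received", user_input)
        answer = f"[stub model output for: {user_input}]"  # placeholder for a real model
        self._record("output_produced", answer)
        # Disclosure so the natural person knows they face an AI system
        return f"{AI_DISCLOSURE}\n{answer}"

system = LoggedAISystem()
reply = system.respond("What is my credit limit?")
print(reply.splitlines()[0])   # the disclosure line
print(len(system.event_log))   # two events logged for one interaction
```

What counts as a sufficient log or disclosure in practice will depend on the system and sector; the sketch only shows the general shape of the obligations.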

How does the AI Act compare to AI regulations in the UK, US, and other regions?

The EU AI Act is a general framework on AI matters, which sets out mandatory compliance requirements for all companies developing, deploying or using AI systems in the EU, regardless of the sector in which they operate. The UK and the US do not have a comparable general AI framework and instead take a fragmented approach. In the UK, AI matters are mostly addressed through sector-specific legislation and regulation; in the US, AI regulation is largely a state-level issue.

How do AI regulations impact finance, banking, and central banking?

AI solutions may be used in a wide range of services related to these sectors, such as AML and fraud detection, credit risk assessment, trading, customer service, regulatory compliance and risk management. AI regulations set out the criteria under which AI systems may be used in this context, ensuring that regardless of the use of AI, these services remain transparent, unbiased, and secure.

What does Article 4 of the AI Act require?

Article 4 of the EU AI Act establishes AI literacy as a requirement for providers and deployers of AI systems: these persons shall take measures to ensure a sufficient level of AI literacy among their staff and other persons involved with the AI systems they provide or deploy. Adequate AI literacy depends on multiple factors: technical knowledge, experience, education, training and the purpose of the AI system.

Exact requirements for AI literacy are context-specific, and covered entities need to ensure implementation of such requirements “to their best extent”. This means that such measures will depend on several factors, such as:

  • Relevant AI system: The AI literacy standard will be higher for high-risk AI systems than for low-risk AI systems;
  • Size and resources of organizations: The standard of AI literacy is likely to increase as the size and resources of the relevant entities increase;
  • Sector: AI literacy requirements should be more stringent in sectors identified in the AI Act as high-risk (e.g., biometrics, education, employment, law enforcement); and
  • Work force: The level of AI literacy of each employee must be aligned with the context in which AI systems are used and with how people may be affected by such systems.

Who do the AI literacy obligations apply to?

AI literacy obligations apply to providers and deployers of AI systems.

What is the difference between an AI provider and an AI deployer?

An AI provider is a person or entity that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge. An AI deployer is a person or entity that uses an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

Does the AI Act apply to companies outside the EU?

Yes, the AI Act applies to companies outside the EU in the following cases:

  • Non-EU providers placing AI systems on the EU market, putting them into service in the Union, or placing general-purpose AI models on the EU market;
  • Non-EU providers and deployers of AI systems, where the output produced by the AI system is used in the Union.

As a general rule, regardless of whether companies providing or deploying AI systems are based in the EU or not, the AI Act shall apply when persons or entities located in the EU are affected by the AI systems provided or deployed.

Who within an organization needs to have AI literacy?

Organizations shall ensure that all their staff and other persons dealing with the operation and use of AI systems on their behalf have AI literacy.

What should an AI literacy programme include for different roles?

Exact requirements for AI literacy programs are context-specific. This means that the specific content of such programs will depend on several factors, such as: (a) the relevant AI system, since AI literacy standards should be higher for high-risk AI systems (i.e., systems used in sectors such as biometrics, education, employment, law enforcement and the administration of justice); (b) the size of the organization, as AI literacy standards should increase as the size and resources of the relevant entity increase; and (c) the work force, as the level of AI literacy of each employee or person related to the AI system must be aligned with the context in which the AI system is used by that employee and tailor-made to their role.

Therefore, there is no one-size-fits-all approach to AI literacy programs. Organizations should consider starting by assessing the current AI knowledge of their staff, particularly those working directly with AI systems. After identifying any gaps, role-specific training programs should be implemented to ensure that employees understand the technical, ethical and risk-related aspects of AI. This training should be regularly updated to reflect new developments and regulatory changes.

When does the EU AI Act come into force?

The EU AI Act entered into force on 2 August 2024.

When must organizations start complying with AI literacy requirements?

Organizations have been required to comply with AI literacy requirements since 2 February 2025 – from this date, all employees and other persons operating and using AI systems on their behalf should possess the necessary knowledge to use AI systems responsibly.

What penalties exist for failing to meet AI literacy requirements?

The penalties applicable to non-compliance with AI literacy requirements will vary from Member State to Member State, since the AI Act does not set out a specific penalty for non-compliance with AI literacy requirements. Instead, the AI Act sets out that Member States shall lay down the rules on penalties and other enforcement measures applicable to non-compliance with the AI Act, which shall be effective, proportionate and dissuasive. In this sense, penalties for non-compliance with AI literacy requirements are not yet set in stone, but it is reasonable to anticipate that they may range from warnings to administrative fines, possibly accompanied by typical ancillary penalties such as the suspension of the activity for a certain period of time or the revocation of the business licence.

Can companies face fines for misleading regulators about AI literacy efforts?

Yes. Providing incorrect, incomplete or misleading information to competent authorities in reply to a request is subject to administrative fines of up to € 7,500,000 or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher (or, in the case of SMEs, whichever is lower).
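The “whichever is higher (or, for SMEs, lower)” cap can be made concrete with a small calculation. The figures are the ones quoted above; the function itself is only an illustrative sketch, not legal advice:

```python
# Illustrative sketch of the fine cap quoted above:
# up to EUR 7,500,000 or 1% of total worldwide annual turnover,
# whichever is HIGHER -- or, in the case of SMEs, whichever is LOWER.

FIXED_CAP_EUR = 7_500_000
TURNOVER_SHARE = 0.01  # 1%

def max_fine(turnover_eur: float, is_sme: bool) -> float:
    turnover_based = TURNOVER_SHARE * turnover_eur
    if is_sme:
        return min(FIXED_CAP_EUR, turnover_based)
    return max(FIXED_CAP_EUR, turnover_based)

# A large group with EUR 2bn turnover: 1% = EUR 20m, higher than the fixed cap.
print(max_fine(2_000_000_000, is_sme=False))  # 20000000.0
# An SME with EUR 50m turnover: 1% = EUR 0.5m, lower than the fixed cap.
print(max_fine(50_000_000, is_sme=True))      # 500000.0
```

The SME branch uses `min` rather than `max` because the Act caps SME fines at the lower of the two amounts.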

How can AI literacy help financial institutions stay ahead?

AI literacy is one of the key steps to ensuring adequate use of AI-based services and solutions. It will help financial institutions better identify in which areas of their activities they may incorporate such services, ultimately increasing the effectiveness and quality of the services provided and products offered, without jeopardising the quality or security of the organisation.

How does AI literacy contribute to competitive advantage?

Ensuring employees have adequate AI literacy allows companies to innovate faster, incorporating AI-based solutions into their internal procedures and/or launching AI-driven products and services, and thereby gaining a competitive advantage over traditional competitors.

How does AI literacy improve risk management and compliance?

There are multiple AI-driven systems that companies may implement in their risk management and compliance functions. AI literacy allows companies to implement AI-driven solutions adequately, reducing the risk of AI-driven errors. AI-literate risk managers and compliance officers will help companies:

  • Better understand AI-based fraud detection algorithms, detecting false positives and negatives and thereby refining risk management models;
  • Interpret AI forecasts accurately, detecting any anomalies or flaws in the algorithm and avoiding financial and operational risks;
  • Audit AI-based models for bias, ensuring fairness in credit scoring and fraud detection;
  • Justify AI-based systems and decisions to regulators; and
  • Align their internal policies and AI systems with regulatory requirements.

What are the risks of not investing in AI literacy?

Investing in AI literacy for staff is a mandatory requirement applicable to all companies using AI-driven solutions. Therefore, non-compliance with AI literacy requirements may entail fines and other ancillary regulatory penalties. Additionally, a lack of investment in AI literacy increases the probability of AI-driven errors due to the inadequate use of AI solutions or misinterpretation of AI-generated content. The consequences of these types of errors can be vast, ranging from financial losses to security issues and reputational damage.

Learn the skills of Fintech

The Centre for Finance, Technology & Entrepreneurship (CFTE) is a global education platform that aims to equip financial professionals and organisations with the necessary skills to remain competitive in a rapidly changing industry. Our leading training programmes, curated by global industry experts, help talent build skills to join the digital revolution in finance. CFTE’s courses are globally recognised with accreditations from ACT, IBF, CPD, SkillsFuture and ABS.
