The Artificial Intelligence Act: Start of Implementation of Certain Provisions, Practical Guidance for Non-EU Companies, and Regulation of AI in Serbia

With the implementation of certain significant provisions of the European Union’s Artificial Intelligence Act beginning in February 2025, this article provides an overview of the regulation, practical guidance on compliance for companies outside the EU, and a summary of the current status of artificial intelligence regulation in Serbia.

In the coming months and years, this topic may prove relevant to more companies, including those outside the EU, than perhaps any other EU regulation. This article will therefore be of interest even to companies that do not have AI as their primary focus, but are:

  • developing or intending to develop products incorporating AI systems for the EU market – including cases where only the output of such a product is to be used in EU territory. In that case, the AI Act may be especially important for their business activities if they operate in sectors identified as high-risk (see the high-risk AI systems topic below);
  • using AI tools in day-to-day business activities (for example, these systems are widely used in HR and employment in general), considering the expected harmonization of Serbian legislation with the new EU regulations.

The EU Artificial Intelligence Act (“AI Act”) is the first comprehensive legislation on artificial intelligence (“AI”) adopted by a major regulator. The AI Act establishes a uniform legal framework for the development, market placement, service provision, and use of AI systems.

It came into force on August 1, 2024, and its provisions are to be implemented gradually over the subsequent 36 months.

Starting from February 2, 2025, the following provisions apply:

Chapter I – General Provisions; and

Chapter II – Prohibited AI Practices.

In addition to definitions and the scope of application, the General Provisions include obligations regarding artificial intelligence literacy: providers and deployers of AI systems must take measures to ensure a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.

Prohibited AI practices refer to artificial intelligence systems that pose an unacceptable risk, as further detailed below.

Who does the AI Act apply to

  • Providers: Entities (including public authorities and natural persons) that create or have created an AI system or general-purpose AI model and place it on the market or put it into use under their own name or trademark, whether for payment or free of charge.
  • Deployers: Entities (including public authorities and natural persons) that use an AI system under their authority. For instance: a public transportation agency that uses an AI-powered traffic control system to optimize traffic flow. The agency does not alter the design or programming of the AI system, but is responsible for using it and ensuring its safe and fair operation.

It is crucial to note that the AI Act also applies to providers and deployers of AI systems that have their place of establishment or are located in a third country, if the output produced by the AI system is used in the EU.

 

  • Importers: Entities established in the EU that bring AI systems from non-EU countries to the EU market.
  • Distributors: Entities that make an AI system available on the EU market, supplied by a provider or importer.
  • Authorized Representatives: Individuals or entities with a written mandate from a non-EU provider to fulfil the obligations and procedures of the AI Act on their behalf.
  • Product Manufacturers: Manufacturers that incorporate AI systems into products requiring third-party conformity assessments under current EU regulations.
  • Affected Persons: Individuals who experience direct consequences from the deployment or use of AI systems, such as data subjects, employees, or consumers.

What qualifies as an AI system

The AI Act defines an AI system as (1) a machine-based system (2) that is designed to operate with varying levels of autonomy, (3) that may exhibit adaptiveness after deployment, and (4) that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

It has been recognized that systems with these characteristics pose a threat to the rights and freedoms of individuals, especially in terms of lack of traceability and explainability. These systems are therefore rated based on the risk they pose and classified as follows: (1) unacceptable risk; (2) high risk; (3) limited risk; and (4) minimal risk.

Classification of AI systems

  1. Unacceptable Risk: AI systems that pose unacceptable risks are systems that:
  • deploy subliminal or manipulative techniques that distort behaviour and impair decision-making, leading to significant harm;
  • exploit vulnerabilities related to age, disability, or social or economic circumstances to distort behaviour, causing significant harm;
  • evaluate or classify individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people (“social scoring”);
  • predict criminal behaviour solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity;
  • compile facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage;
  • infer emotions in workplaces or educational institutions, except for medical or safety reasons;
  • perform biometric categorisation to infer sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation), except for labelling or filtering of lawfully acquired biometric datasets or when law enforcement categorises biometric data;
  • perform ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement, except when searching for missing persons, preventing imminent threats, or identifying suspects in serious crimes.

AI systems with an unacceptable risk are prohibited.

  2. High-Risk: AI systems that pose significant risks of harm to the health, safety, or fundamental rights of natural persons are considered to be high-risk.

An AI system shall be considered high-risk if it is a safety component of a product, or is itself an independent product, falling under the scope of certain EU regulations, namely in the field of machinery, toys, recreational craft, lifts, equipment for explosive atmospheres, radio equipment, pressure equipment, cableways, personal protective equipment, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, civil aviation equipment, two- or three-wheel vehicles, agricultural and forestry vehicles, marine equipment, railway systems, and motor vehicles and trailers.

These AI systems are always considered high-risk, without exceptions or the possibility to prove otherwise.

In addition to the above, the following are also considered high-risk AI systems:

  • Non-banned biometric systems: remote biometric identification (e.g., facial recognition in public spaces); biometric categorization based on sensitive attributes; emotion recognition systems.
  • Critical infrastructure: AI used as a safety component in the management of critical infrastructure such as transportation, energy, and water.
  • Education and vocational training: AI for student admission, learning evaluation, determining education levels, or monitoring behaviour during tests.
  • Employment and worker management: AI used for recruitment, job evaluations, task allocation, and monitoring worker performance.
  • Access to essential services: AI for public service eligibility (e.g., healthcare, social benefits); AI for credit scoring, risk assessment in insurance, or prioritizing emergency services.
  • Law enforcement: AI used for crime risk assessments, polygraphs, evidence evaluation, and profiling in criminal investigations.
  • Migration and border control: AI used in migration assessments, risk evaluations, asylum applications, and border security.
  • Justice and democratic processes: AI used to assist in legal decisions or influence election outcomes or voting behaviour.

The above-listed AI systems shall not be considered high-risk if they are intended to: (1) perform a narrow procedural task or a task preparatory to an assessment relevant for the intended use; (2) improve the result of a previously completed human activity; or (3) detect decision-making patterns and deviations, without influencing a previously completed human activity.

Notwithstanding the mentioned exceptions, an AI system that performs profiling of natural persons is always considered to be high-risk.

High-risk systems are allowed, but they are subject to detailed compliance requirements aimed at mitigating the risk.

  3. Limited-Risk: AI systems with limited risk, such as chatbots and deepfakes, are regulated with specific information and transparency requirements (end users must be made aware that they are interacting with AI).

  4. Minimal-Risk: Minimal-risk AI systems (e.g., AI-enabled video games and spam filters) face minimal regulation, primarily focusing on transparency.
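
To make the four-tier classification concrete, the following minimal Python sketch shows how an internal compliance tool might run a first-pass triage of a planned AI use case. The categories and example use cases are taken from the classification above; the function, variable names, and keyword sets are our own illustrative assumptions, and string matching is of course no substitute for legal analysis.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"               # Chapter II practices, e.g. social scoring
    HIGH = "allowed, with compliance duties"  # e.g. recruitment, credit scoring
    LIMITED = "transparency duties"           # e.g. chatbots, deepfakes
    MINIMAL = "minimal regulation"            # e.g. spam filters, AI-enabled games

# Illustrative keyword sets distilled from the classification above (not exhaustive).
PROHIBITED_USES = {"social scoring", "subliminal manipulation", "untargeted face scraping"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "border control", "exam proctoring"}
LIMITED_RISK_USES = {"chatbot", "deepfake"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass triage of a planned AI use case (hypothetical helper)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("recruitment"))  # RiskTier.HIGH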

The AI Act outlines various obligations and requirements based on the role of the entity and the risk level of the AI system. Most of these obligations fall upon the providers and deployers of high-risk AI systems, and below we highlight the most significant ones.

Obligations of Providers of High-Risk AI Systems

Providers of high-risk AI systems must:

  • Ensure their AI systems undergo a conformity assessment to meet the requirements of the AI Act before being deployed;
  • Implement a risk management system to identify, assess, and mitigate risks throughout the lifecycle of the AI system;
  • Conduct data governance, ensuring that training, validation and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose;
  • Draw up technical documentation to demonstrate compliance and provide authorities with the information to assess that compliance;
  • Design their high-risk AI systems so that they technically allow for the automatic recording of events (logs) over the lifetime of the system (see the illustrative sketch after this list);
  • Design their AI systems in such a way as to ensure that their operation is sufficiently transparent, and provide deployers with instructions for use;
  • Design their AI systems in such a way that they can be effectively overseen by natural persons during the period in which they are in use;
  • Design their AI systems in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle;
  • Establish a quality management system to ensure compliance;
  • Register themselves and their AI system in the EU database.
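
As a simple illustration of the event-logging obligation referenced above, a provider might ship the system with an append-only event recorder along the following lines. This is a minimal sketch under our own assumptions: the function name, field names, and JSON-lines storage are illustrative choices, not a format mandated by the AI Act.

import json
import time
import uuid

def record_event(log_path: str, event_type: str, details: dict) -> None:
    """Append one timestamped event to an append-only JSON-lines log."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event_type": event_type,  # e.g. "inference", "model_update", "override"
        "details": details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: recording a single inference event.
record_event("ai_events.log", "inference", {"model_version": "1.2.0", "outcome": "approved"})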

Obligations of Deployers of High-Risk AI Systems

Deployers of high-risk AI systems must:

  • Use AI systems in accordance with the instructions for use accompanying the systems;
  • Assign qualified human oversight to monitor AI systems;
  • Monitor the operation of high-risk AI systems and report any incidents or risks to providers and authorities;
  • Maintain logs of AI system operations for at least six months;
  • Inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system.

General Purpose AI (“GPAI”)

The AI Act defines the following regarding GPAI:

General-purpose AI model – an AI model, including AI models trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market.

General-purpose AI system – an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.

The AI Act also explains GPAI models with a systemic risk and includes obligations that must be met by providers of GPAI models, including those with a systemic risk.

Penalties under the AI Act

  • Prohibition Violations: Fines of up to €35 million or 7% of total worldwide annual turnover from the preceding financial year, whichever is higher.
  • High-Risk System Non-Compliance: Fines of up to €15 million or 3% of total worldwide annual turnover from the preceding financial year, whichever is higher.
  • Incorrect Information Supply: Fines of up to €7.5 million or 1.5% of total worldwide annual turnover from the preceding financial year, whichever is higher.
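
Because each cap applies as “whichever is higher”, the operative maximum depends on the company’s turnover. The short Python sketch below works through the arithmetic for a hypothetical company with €600 million in worldwide annual turnover; the figures are purely illustrative.

def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

turnover = 600_000_000  # hypothetical worldwide annual turnover in EUR

print(max_fine(35_000_000, 0.07, turnover))   # ≈ 42,000,000 – prohibition violations
print(max_fine(15_000_000, 0.03, turnover))   # ≈ 18,000,000 – high-risk non-compliance
print(max_fine(7_500_000, 0.015, turnover))   # ≈ 9,000,000 – incorrect information supply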

Practical Guidance for Non-EU Companies to Ensure Compliance with the AI Act

The AI Act has an extraterritorial scope, meaning that it also applies to companies operating outside the EU when they offer AI-related products or services within the EU market. The AI Act therefore applies to the following companies that are established or operating outside the EU:

(a) providers placing on the EU market or putting into service AI systems or placing on the EU market general-purpose AI models;

(b) providers and deployers of AI systems, where the output of such systems is used in the EU.

Familiarize Yourself with the AI Act

In order to determine whether your company falls under the scope of the AI Act, you should become familiar with its provisions. The full text of the AI Act is available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.

If you are a provider, it is important to be acquainted with all obligations arising from the AI Act, to avoid unnecessary costs. The AI Act governs the whole development process of AI tools, not only their placement on the market, and it will not be possible or efficient to comply with all obligations only after a product has been developed.

If you are a deployer, it is important to be aware of your obligations before buying a product, to make sure it is adequate for your intended purpose. Transparency obligations imposed on providers will facilitate your compliance, but the responsibility for a specific use remains exclusively with the deployer.

Understanding the AI Act is crucial not only for compliance but also for accessing the EU market and gaining a competitive edge by building a positive reputation.

Assess Risk and Compliance

Once you determine that your company falls under the scope of the AI Act, the next step is to classify your product according to the risk categories presented above. The required compliance measures depend on the level of risk (unacceptable, high, limited, minimal) and on the role of your company (provider, deployer, importer/distributor, etc.).

Implement Compliance Measures

The AI Act’s obligations for providers and deployers (that are listed above) also apply to those established outside the EU if their AI system is placed on or used in the EU market. Some obligations may need to be fulfilled through an authorized representative. Therefore, after assessing the current level of compliance, companies should implement the necessary technical, organizational, operational, or legal measures to ensure compliance.

Appoint an Authorized Representative

The AI Act explicitly requires providers established in third countries to appoint an authorized representative in the EU, by written mandate, before making high-risk AI systems available on the EU market. The authorized representative is crucial for non-EU companies placing AI systems on the EU market: they act as the point of contact for EU authorities regarding compliance with the AI Act and ensure that the AI systems meet regulatory requirements.

Stay Updated

Keep informed of any changes to the AI Act and other relevant regulations. Due to the rapid development of AI technology, continuous adjustments in regulations are inevitable. Staying informed ensures compliance, supports ethical practices, and aligns business operations with the latest standards and best practices.

Regulation of AI in Serbia

In Serbia, the process of drafting the Law on Artificial Intelligence is underway, with the expectation that the draft will be presented by the end of March 2025. Additionally, the Artificial Intelligence Development Strategy for the period 2025–2030 was adopted on January 10, 2025 (“Official Gazette of the RS” No. 5/2025 – “Strategy”). One of the Strategy’s objectives is to “create and align the legal framework and institutions for the safe, secure, and responsible application of artificial intelligence”, which is planned to be achieved through the adoption of the Law on Artificial Intelligence and corresponding by-laws. According to the Strategy, the goal is for the Law on Artificial Intelligence to be fully implemented by the end of 2027.

In the meantime, since March 2023, the Ethical Guidelines for the Development, Application, and Use of Trustworthy and Responsible Artificial Intelligence (“Official Gazette of the RS” No. 23/2023 – “Guidelines”) have been in effect. These Guidelines have been developed in accordance with the recommendations of UNESCO and the European Union. Their application is recommended for both legal and natural persons, in both the public and private sectors.

The Guidelines specifically address: 1. Individuals working on the development and/or application of artificial intelligence systems; 2. Individuals applying artificial intelligence systems, primarily in work that includes interaction with other individuals (e.g., market participants); 3. Individuals using artificial intelligence systems or affected by them: (a) directly (e.g., using systems to access public services); (b) indirectly (e.g., as part of a group researching rare diseases whose medical data is processed as part of Serbia’s strategy to improve national health); 4. The general public, in the broadest sense.

The Guidelines also detail which systems are considered high-risk: systems that tend to directly or indirectly violate the principles and conditions set out in the Guidelines, but do not necessarily do so. Such systems are not necessarily undesirable, but they need to be specially analyzed and assessed for their impact.

The following principles are recognized as a basis for creating, applying, and using artificial intelligence systems that are trustworthy and responsible towards humanity and worthy of human trust: 1. Explainability and Verifiability, 2. Dignity, 3. Do No Harm, 4. Fairness.

Also listed are the conditions for building and creating trustworthy and responsible artificial intelligence, which are defined through: 1. Action (mediation, control, participation) and supervision; 2. Technical reliability and safety; 3. Privacy, personal data protection, and data management; 4. Transparency; 5. Diversity, non-discrimination, and equality; 6. Social and environmental well-being; 7. Responsibility.

The conditions include both technical and non-technical methods, which confirm and demonstrate compliance with the above-mentioned principles. Technical methods for each of the listed criteria are presented as recommendations, while non-technical methods are provided in the form of a questionnaire intended for assessing individual artificial intelligence systems in terms of their adherence to the fundamental principles and conditions contained in the Guidelines.

As stated in the Guidelines themselves, their intention is to provide a framework and direct the work of all participants in the artificial intelligence ecosystem. In the absence of a more concrete legal framework, they enable further development in this area.
