How the European Union regulates digital services and AI technology

Unsplash/Sigmund

Original article (in Croatian) was published on March 14, 2025; Author: Ivica Kristović

Faced with criticism from the US, European officials respond that EU rules aim to ensure a safe internet and the same values in the digital and physical worlds.

Since coming to power, the new administration of US President Donald Trump has taken a very critical stance towards European Union regulation related to digital services and the use and development of artificial intelligence technology.

It is a clash of two concepts: the American side says it stands for free speech and the unhindered business of large technology companies, while the EU responds that it wants to establish a secure internet.

Republican Brendan Carr, whom Trump appointed to head the US Federal Communications Commission (FCC), commented on March 3 at the Mobile World Congress in Barcelona that European regulation related to content moderation on the Internet is incompatible with the American tradition of freedom of speech, and he expressed concern that it will suppress freedom of expression.

“There are concerns about the approach that Europe has taken with the DSA (EU Digital Services Act),” Carr said, adding that for American technology companies operating in Europe, this approach is inconsistent with the American tradition of free speech and the obligations that these companies have regarding pluralism of opinion.

JD Vance: “Authoritarian Censorship”

Carr is the second high-ranking official to criticize European regulation in recent months. The first was US Vice President JD Vance, who called European rules on moderating content on social networks “authoritarian censorship” at the AI Summit in Paris. US President Trump has also made freedom of speech, as the current administration understands it, one of the key themes of his term: on his first day in office he signed an executive order claiming to restore freedom of speech and declare an end to censorship.

European Commission spokesman Thomas Regnier said Carr’s accusations of censorship in the DSA were completely unfounded. “The aim of our digital regulations, such as the DSA, is to protect fundamental rights. We all agree on the need to ensure that the Internet is a safe place, as Vance said at the AI Action Summit in Paris,” Regnier said.

Virkkunen: “We defend the same values in the digital and physical world”

Henna Virkkunen, the European Commission’s Executive Vice-President for Tech Sovereignty, Security and Democracy, spoke about the principles guiding the development of artificial intelligence technologies, as well as regulation of the internet, during a panel at the Munich Security Conference.

“In the European Union, we have an approach that we want to defend the same values in the digital world that we have in the physical world. We want to have a safe, fair and democratic environment in the digital world,” she said, adding that they want to establish these principles through the Digital Services Act (DSA) as well as the Artificial Intelligence Act (AI Act).

“The key principle is risk assessment. For example, with digital services, large internet platforms, which have great power in our democracies and economies, have more obligations. They must assess and mitigate the systemic risks they pose to electoral processes or civic discourse, and they must have their own processes in place to mitigate those risks,” Virkkunen explained, adding that similar principles have been established for the use of artificial intelligence.

“The AI Act has banned some of the practices. You cannot do so-called social scoring in the EU or track the emotions of workers in the workplace because that goes against human rights and the fundamental values of the European Union,” Virkkunen said.

Artificial Intelligence Act

Below, we will explain these two European Union documents in more detail – the Artificial Intelligence Act and the Digital Services Act.

The beginning of the Artificial Intelligence Act lists the positive aspects of introducing artificial intelligence technology into various systems:

“Artificial intelligence is a rapidly evolving set of technologies that contribute to a wide range of economic, environmental and social benefits across a range of industries and societal activities. By better predicting, optimising operations and resource allocation, and personalising digital solutions available to individuals and organisations, the use of artificial intelligence can provide businesses with key competitive advantages and support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, food safety, education and training, media, sport, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, conservation and restoration of biodiversity and ecosystems, and climate change mitigation and adaptation.”

Risks in using artificial intelligence tools

However, it then points out that such systems can also cause harm: “At the same time, depending on the circumstances of its specific application, use and level of technological development, artificial intelligence may create risks and cause harm to public interests and fundamental rights protected by Union law. That harm may be material or non-material, including physical, psychological, social or economic harm.”

The EU has divided these risks into four levels: unacceptable risks, high risks, transparency risks, and minimal or no risks.

The unacceptable risks include: harmful manipulation and deception based on artificial intelligence; harmful exploitation of vulnerabilities based on artificial intelligence; social scoring; assessment or prediction of the risk of an individual crime; untargeted scraping of the internet or CCTV footage to create or expand facial recognition databases; emotion recognition in the workplace and in educational institutions; biometric categorization to derive certain protected features; and real-time remote biometric identification in public places for law enforcement purposes.

“Manipulative techniques enabled by artificial intelligence can be used to induce individuals to engage in unwanted behavior or to deceive them into making decisions in a way that undermines and impairs their autonomy, decision-making and free choice. The placing on the market, putting into service or use of certain AI systems with the objective or effect of materially distorting human behavior, where significant harm is likely to occur, in particular harm with sufficiently serious adverse effects on physical or psychological health or financial interests, is particularly dangerous and should therefore be prohibited. Such AI systems use subliminal components such as audio, image and video stimuli that individuals cannot perceive because they lie beyond human perception, or other manipulative or deceptive techniques that undermine or impair a person’s autonomy, decision-making or free choice in ways that people are not consciously aware of, or that, even when they are aware of them, still deceive them or leave them unable to control or resist them,” the Artificial Intelligence Act states.

Fraud

Classic manipulations that fall under this heading are already appearing in practice, and we have written about many of them on Faktograf. AI tools can already produce fairly convincing fake audio and video recordings that can lead citizens to make harmful decisions, for example about voting in elections, putting money into risky investments, or buying fake medicines. While artificial intelligence tools were not used for electoral manipulation against opponents during Croatia’s 2024 “super election” year, ads that use this technology to sell people fake investment platforms appear on social networks every day, and there have been recorded cases of reputable doctors being exploited in fake AI-generated content that supposedly urges people to buy dubious medicines.

Categorization based on collected biometric data is also problematic. “Biometric categorisation systems based on the biometric data of natural persons, such as an individual’s face or fingerprint, in order to deduce or infer their political opinions, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation should be prohibited,” the document states, adding that “AI systems enabling the social scoring of natural persons used by public or private actors may result in discriminatory outcomes and the exclusion of certain groups”.

“They may violate the right to dignity and non-discrimination and values such as equality and justice. Such AI systems evaluate or classify individuals or groups of individuals based on multiple data related to their social behavior in multiple contexts or known, inferred or predicted personal or personality traits over time. The social rating obtained by such AI systems may result in harmful or unfavourable treatment of individuals or entire groups of individuals in social contexts unrelated to the context in which the data was originally generated or collected, or in harmful treatment that is disproportionate or unjustified in relation to the severity of the consequences of their social behavior,” it states.

This is intended to prevent a large system from collecting data about a person from various databases and determining a “social rating” on that basis.

The aim is also to limit biometric identification of persons in public places for law enforcement purposes, in order to avoid intrusions into private life and so that the population does not feel under constant surveillance. “Therefore, the use of these systems for law enforcement purposes should be prohibited, except in exhaustively listed and narrowly defined situations where their use is necessary to achieve a substantial public interest, the importance of which outweighs the risks,” the document states, before elaborating in detail the cases in which such surveillance would be permissible.

“The placing on the market, putting into service for that specific purpose or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or from CCTV footage should be prohibited, as this practice contributes to a sense of mass surveillance and can lead to serious violations of fundamental rights, including the right to privacy,” the document states.

High-risk use cases for AI tools

The AI Act classifies as high-risk those uses of AI that may pose a serious risk to health, safety or fundamental rights. These include:

- AI safety components in critical infrastructure (e.g. transport), whose failure could endanger the life and health of citizens;
- AI solutions used in educational institutions, which can determine access to education and the course of a person’s professional life (e.g. exam scoring);
- AI-based safety components of products (e.g. the application of AI in robotic surgery);
- AI tools for recruitment, workforce management and access to self-employment (e.g. CV-sorting software for hiring);
- certain AI use cases that determine access to essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain a loan);
- AI systems used for remote biometric identification, emotion recognition and biometric categorization (e.g. an AI system for retroactively identifying shoplifters);
- uses of AI by law enforcement bodies that may affect people’s fundamental rights (e.g. evaluating the reliability of evidence);
- AI use cases in migration, asylum and border management (e.g. automated examination of visa applications);
- AI solutions used in the judiciary and democratic processes (e.g. tools for preparing court judgments).

Such systems are subject to strict obligations before being placed on the market, which, among other things, relate to ensuring high quality of data sets and detailed documentation of the system and its purpose.

To address the transparency risks of the systems being introduced, the AI Act imposes specific obligations to ensure that people are informed. For example, when using AI systems such as chatbots, people should be made aware that they are interacting with a machine so that they can take an informed decision. The act also requires that AI-generated content be identifiable and clearly and visibly labelled. The EU stresses that most AI systems currently in use pose minimal or no risk, such as AI-enabled video games or spam filters.

The AI Act entered into force in the EU on 1 August 2024 and should be fully applicable from 2 August 2026, but three exceptions have been set: prohibitions and obligations on AI literacy started to apply on 2 February 2025; governance rules and obligations for general-purpose AI models start to apply on 2 August 2025; rules for high-risk AI systems embedded in regulated products have an extended transition period until 2 August 2027.

In December 2024, the Croatian Government designated the domestic competent authorities for the implementation of the Artificial Intelligence Act. The competent authorities are: the Ombudsman, the Ombudsman for Children, the Ombudsman for Gender Equality, the Ombudsman for Persons with Disabilities, the Personal Data Protection Agency, the State Electoral Commission and the Agency for Electronic Media. “This step ensures the compliance of the Republic of Croatia with European rules and contributes to the protection of fundamental rights of citizens in the context of the development and application of artificial intelligence,” the Government stated.

Digital Services Act

The other document for which the new US administration is criticizing the European Union is the Digital Services Act. In Croatia, the law implementing it reached its second reading in Parliament this week.

As explained in the text on the Faktograf Association website, it was adopted by the European Parliament in October 2022 with the aim of preventing illegal and harmful activities on the internet and the spread of disinformation. The European Commission explains on its website that the act regulates “online intermediaries and platforms such as online marketplaces, social networks, content-sharing platforms, app stores and travel and accommodation platforms”, and that its implementation “guarantees the safety of users and the protection of fundamental rights, and creates a fair and open environment on online platforms”.

Reducing the potential risks of large platforms

The European Commission states that the obligations of different online actors are aligned with their role, size and influence in the online environment. Very large online platforms are identified as particularly risky in terms of spreading illegal content and causing harm to society, and special rules apply to those used by more than 10 percent of the EU’s 450 million inhabitants. Looking at the list of these platforms, it is clear that American technology companies predominate, which is why the document has drawn criticism from the US administration.

As we wrote earlier, instead of categorical bans that would be difficult to enforce in automated decision-making systems, the EU has opted to reduce the risks of new technologies and penalize those who do not introduce appropriate “safeguards”. Thus, large online platforms and very large online search engines are required to “identify, analyze and assess with due diligence all systemic risks in the Union arising from the design or functioning of their service and its related systems, including algorithmic systems, and from the use of their services”.

They must report annually on the assessed systemic risks, which include: “the dissemination of illegal content through their services; any actual or foreseeable negative effects on the exercise of fundamental rights, in particular the right to human dignity as set out in Article 1 of the Charter, the right to respect for private and family life as set out in Article 7 of the Charter, the right to the protection of personal data as set out in Article 8 of the Charter, the right to freedom of expression and information, including freedom and pluralism of the media, as set out in Article 11 of the Charter, the right to non-discrimination as set out in Article 21 of the Charter, the right to respect for the rights of the child as set out in Article 24 of the Charter and the right to a high level of consumer protection as set out in Article 38 of the Charter; any actual or foreseeable negative effects on civic discourse and electoral processes and on public security; any actual or foreseeable negative effects related to gender-based violence, the protection of public health and minors and serious negative consequences for the physical and mental well-being of a person”. It is stipulated that large platforms must establish “reasonable, proportionate and effective measures to mitigate the risk”.

DSA in second reading in Parliament

Croatia initiated the procedure for adopting the DSA at the end of the mandate of the previous government of Andrej Plenković, and it was not until January 2024 that the Ministry of Economy published a public consultation on the Draft Law on the Implementation of the Digital Services Act. As there had been no further progress by July, the European Commission initiated proceedings against Croatia for breach of EU law. At the end of August, the new government, in which the Ministry of Justice, Administration and Digital Transformation took over responsibility for the DSA, adopted the Draft Law, and at the end of September the Croatian Parliament debated it in the first reading.

The Government has now sent the bill to parliamentary procedure for its second reading. HAKOM (the Croatian Regulatory Authority for Network Industries) is designated as the national Digital Services Coordinator, a body that must meet strict requirements: independence from any external influence, autonomy in managing its budget, and sufficient technical, financial and human resources.

The public authorities that the DSA would empower in Croatia to issue orders to act against illegal content, such as removing content or limiting its reach, would be:

- the State Attorney’s Office and the Ministry of the Interior, for illegal content that constitutes a criminal offense or a misdemeanor;
- the Personal Data Protection Agency, for illegal content that violates regulations governing the protection of personal data;
- the Customs Administration of the Ministry of Finance, for illegal content that infringes intellectual property rights;
- the State Inspectorate, for illegal content that violates regulations within its inspection remit, in accordance with the powers determined by special regulations;
- the Ministry of Health, for illegal content that constitutes a violation in the field of healthcare, medicines, medical products and biomedicine, in accordance with the powers determined by special law;
- the Electronic Media Agency, for illegal content that constitutes a violation in the field of electronic media, in accordance with the powers determined by special regulations;
- and other bodies, in accordance with the powers determined by the special laws governing their scope of work.

These bodies will be able to issue orders to act against illegal content and orders to provide information ex officio. Providers of intermediary services and recipients of the service will be able to file a complaint against these orders with the Municipal Misdemeanor Court in Zagreb, while an administrative dispute may be initiated before the administrative court against decisions of the Digital Services Coordinator on complaints.

Fines for violating the provisions of the law are set at up to 6 percent of the service provider’s annual worldwide revenue, which can be significant amounts for large technology companies.

Alongside differing views on the conditions for ending the war in Ukraine, the tariff-driven trade war and the regulation of the internet space are likely to be among the issues weighing on EU-US relations under President Donald Trump in the coming years. The views on the two sides of the Atlantic differ significantly: while the US claims that the EU is introducing censorship and placing too great a burden mainly on American technology companies, the European Union responds that it only wants a secure internet with as much transparency and as few risks for its citizens as possible.
