The region is overwhelmed by scams using AI-generated videos of celebrities

photo: Fatma Jarghon/Unsplash

The original article (in Croatian) was published on 25/9/2025; author: Sanja Despot

According to the analysis conducted by Faktograf, the key narratives about the development of artificial intelligence circulating on social networks in the region are similar to those found in other countries, and can be reduced to several points.

AI technologies are perceived simultaneously as a driver of progress and a source of deep social, economic and existential risks. In addition:
– There is an awareness of the risks of manipulation generated by artificial intelligence, such as deepfakes, fake news and false representations, which can increase distrust in the media and in society as a whole.
– Geopolitical tensions increasingly include an AI dimension, especially in the context of relations between global powers and security.
– There are concerns about mass automation of work, job losses and the transformation of traditional employment, with specific questions about which occupations will survive.
– There is scepticism towards both Western and Chinese AI models, with discussions about localization, sovereignty and regulatory oversight.

From news to conspiracy theories

This was observed in Faktograf’s analysis of AI narratives circulating in the region, carried out using the Gerulata Juno monitoring and analytics system. The analysis covered more than 1,000 pieces of content published on more than 50 different pages and channels on the social networks Telegram, TikTok, Instagram, YouTube and Facebook between March and August 2025.

The analysis showed that the main narratives related to AI relate to the technological, geopolitical, economic and social effects of AI, and publications range from news and analysis to conspiracy theories and speculative comments.

For the purposes of the analysis, published under the title “Fraud with a Familiar Face – AI-generated content in the SEE region”, author Tajana Broz also collected information about content encountered by members of the SEE Check network and their portals. Besides Faktograf.hr, these portals include Raskrinkavanje.me from Montenegro, Raskrinkavanje.ba from Bosnia and Herzegovina, and Fake News Tragač and RasKRIKavanje from Serbia. All of these portals monitor and expose disinformation, and given the similarity of the languages, disinformation easily flows from one country to another. The information collected does not represent a comprehensive overview of AI-generated content in the SEE region, but stems from the portals’ daily operations and their monitoring of content on social networks.

The analysis focused on how much of the perceived content relates to local actors and contexts, and how much to international ones, as well as to which topics this content relates to, and whether political AI-generated content appears in the countries of the region.

Images and video as most common

During its operation, and particularly in the last two years, Faktograf has detected more than 60 pieces of AI-generated content that constituted disinformation or manipulation. This content most often appeared in the form of videos and images; textual examples were present in negligible quantities, and no audio-only content was recorded.

In Faktograf’s sample, disinformation related to Croatia (40%) is most prevalent, followed by global events, such as those from the US, the Middle East and Europe.

A fake image of French President Emmanuel Macron embracing a man; screenshot: Facebook

Local AI-generated disinformation relates almost exclusively to financial and health fraud. In addition, AI-generated manipulations concerning foreign politics, wars and natural disasters frequently circulate among Croatian users.

This indicates that manipulative and fraudulent content appears in times of crisis and around socially sensitive topics, and serves to promote narratives about these events (for example, evoking sympathy for the victims of war or discrediting politicians).

Locally created AI disinformation mainly used for financial fraud

AI-generated content related to fraud most often abuses photos and videos of journalists, politicians, government officials and scientists, especially doctors, the analysis states. Scammers create deepfakes that simulate or repurpose television shows or press interviews, in which the guests, politicians or scientists talk about quick money opportunities or miracle cures for widespread health problems.

screenshot: Facebook

Looking specifically at election periods: in its report on disinformation during the 2025 local elections, “Facts Behind the Campaign”, Faktograf stated that at the beginning of the 2024/2025 super-election cycle there were concerns about the possible impact of AI-generated disinformation on the electoral process, but these did not materialize.

Political content

AI-generated audio and video political content generally falls into two large groups. The first consists of scams that use AI-generated footage of politicians, which has already been discussed; the second consists of satire, ridicule, political commentary and political propaganda, often resorting to defamation, stereotypes and even hate speech, especially towards migrants, foreign workers and the LGBTIQ+ community. Quite often, AI-generated video content is not clearly labelled, but the tone and narrative of these videos make it clear that they are political propaganda rather than an attempt to convince voters that they are watching real events.

In Serbia, the situation is somewhat different. RasKRIKavanje states that in the course of its work it mostly recorded AI content of a local character, i.e. content concerning current events in Serbia. For example, in April a representative of the ruling party posted an AI-generated photo intended to discredit students participating in the blockade, showing students eating at a table with a large Croatian flag hanging above them. In just a few hours, the post reached over 11,000 views, 120 shares and more than 200 likes, according to an article on RasKRIKavanje.

Likewise, AI-generated content with falsified statements by Serbian opposition politicians was shared on some national TV stations, often without any note that AI had been used to create it.

“But even when it includes a note, it is very questionable if this is enough to indicate to the audience of these media the inauthenticity of the displayed audio-visual recordings,” pointed out Teodora Koledin, Fake News Tragač journalist.

Textual misinformation is harder to spot

The analysis conducted by Faktograf coincides with the earlier analysis “GOING VAIRAL – Virality of AI-Generated Content”, published by the Fake News Tragač portal in February 2025. Fake photos and deepfake videos proved to be the most common.

The authors of that analysis point out that, although visual AI manipulations in their sample are more prevalent than textual ones, this does not necessarily mean there are fewer textual manipulations in reality, but rather that they are more difficult to detect and prove. The tools that can detect fake photos and videos, although far from infallible, are currently somewhat more reliable than those that detect AI-generated text.

The analysis shows that AI misinformation and manipulation spread through three key patterns: speed (they appear and spread within minutes, while fact-checks and content moderation lag behind, creating “vulnerable periods” of mass reach); scalability (once a format proves successful, it is easily multiplied into hundreds of variations); and emotional manipulation (they provoke strong audience reactions).

The abuse of AI-generated content for financial and medical fraud has been recorded in all countries covered by the analysis, with a common pattern being the use of recognizable public figures to create an impression of credibility.

Delayed reaction by institutions

Unfortunately, Croatian institutions do little to protect victims of fraud, including the persons whose names and faces are used to commit it. According to Faktograf journalist Ivan Nekić in his article What Croatian Institutions Are (Not) Doing About Fraud Generated by Artificial Intelligence, legislation has not kept pace with the development of technology, and the Ministry of the Interior keeps no specific statistics that would show how many such cases have been recorded at all. Bodies that, under European and national legislation such as the DSA, have the power to report illegal content to platforms are still becoming acquainted with their own powers. Even then, their role is limited to removing the content rather than detecting and punishing the perpetrators.

As other countries record a growing number of AI-generated criminal offenses, it is clear that the failure of Croatian institutions to keep pace with technological developments could endanger citizens’ safety, states the analysis.

The situation is no better in the rest of the SEE region. Since these countries are not members of the European Union, they are not covered by existing European legislation. Maida Salkanović wrote about this extensively for SEE Check in the article Why the Western Balkans Need the Digital Services Act, pointing out that national legislation in the region is very weak.

According to a survey conducted by Partners for Democratic Change Serbia, efforts to align national legislation with DSA principles were fragmented and insufficient. Instead of adopting a single regulatory framework that would systematically cover platform accountability, transparency of algorithms and user protection, there have only been isolated changes to existing laws, such as those relating to audiovisual media or consumer protection.

In addition to fraud, content related to global events – especially politicians, wars, and natural disasters – circulates widely in the region. Organizations that deal with the disclosure of AI-generated manipulations often encounter identical content among users of social networks in a particular country, which indicates a significant transnationality of disinformation content.

In the region, we also observed entertaining, satirical, and other AI-generated content that is either explicitly labelled as AI or clearly recognizable at first glance as inauthentic. Nevertheless, we repeatedly noticed that such content has also sparked debates and even conspiracy theories among certain users. Cases of publishing AI-generated content in mainstream media are particularly worrying. In some instances, content is shared uncritically, without verifying its authenticity, or deliberately presented without clearly informing viewers that it is AI-generated.
