The original article (in Croatian) was published on 3/6/2025; author: Ivan Nekić
Although AI-enabled fraud is becoming ever more sophisticated, Croatian institutions have so far done little about it.
Videos generated by artificial intelligence that use the voice and likeness of well-known figures from Croatian public or political life are nothing new. Fraudsters have repeatedly targeted the Croatian state leadership – Prime Minister Andrej Plenković and President Zoran Milanović – and in almost all cases the bait was an investment platform promising citizens easy and quick profits.
In addition to investment scams, another common example of AI-enabled fraud is the sale of miracle drugs that supposedly cure diseases “in the blink of an eye”. For these purposes, fraudsters exploit the image and reputation of well-known doctors and scientists – neurologist Josip Paladin, immunologist Stipan Jonjić, infectious disease specialist and director of the Clinic for Infectious Diseases “Dr. Fran Mihaljević” Alemka Markotić, and molecular biologist Ivan Đikić. AI disinformation has not spared the prominent entrepreneur Emil Tedeschi, nor well-known television presenters and hosts such as Nina Kljenak, Igor Bobić, Zoran Šprajc, and Mojmira Pastorčić.
The former Minister of Health was also the target of AI-generated scams
One of the most notable cases of a well-known politician being used to promote a miracle cure involved the now former Minister of Health, Vili Beroš. Recall that in September 2023, a recording (archived here) was published on the Facebook profile “Doctor Thomas” in which Croatian Minister of Health Vili Beroš says he has invented a natural cure for joint pain. It was, of course, a hoax. Beroš’s statement, highly convincing in voice and manner of speaking, was computer-generated using AI tools for creating audio recordings. Alongside the audio, the fraudsters used footage from an interview that Vili Beroš actually gave to the 24 sata portal, published as part of an article on April 20, 2020.
The Ministry of Health and Beroš himself told Faktograf at the time that it was a scam and announced that they would report it to the competent institutions. Given that AI-generated recordings are disseminated on social networks with the aim of deceiving citizens, we asked what the Ministry of the Interior, the State Attorney’s Office (DORH) and HAKOM as the regulator are doing to protect citizens from such scams. We additionally asked the Ministry of the Interior and DORH whether they had received any reports specifically concerning the dissemination of fake AI recordings and whether they had conducted any investigations in that regard.
The Ministry of the Interior has no data on the number of personal data misuses
The State Attorney’s Office told Faktograf only that it is “conducting an investigation into a case with a similar topic.” In a response signed by a deputy municipal state attorney, they noted that investigations are secret under Article 206f, paragraph 1, of the Criminal Procedure Code.
The Ministry of the Interior told our portal that they have no regulatory authority over content in digital media. “Such or similar advertisements represent a preparatory act for the commission of various criminal offenses, mainly fraud as the predicate offense, in which the perpetrator misuses another person’s personal data; however, official statistics contain no information on how many fraud offenses were committed through identity theft,” the Ministry of the Interior points out.
“Identity theft” does not exist in the Criminal Code
They further explain that the phrase “identity theft” is used in everyday speech, but that no such offense exists – that is, it is not defined in the Criminal Code.
“In this specific case, it could also constitute the criminal offense of Unauthorized Use of Personal Data, and the perpetrator’s goal is probably to commit investment fraud. We do not have statistical data on the manner in which investment fraud offenses are committed,” the ministry’s Public Relations Service said. They added that they cooperate with Meta for the purposes of investigating criminal offenses and that the cooperation is at a satisfactory level.
HAKOM, as the regulator, has not yet received any reports of identity theft using artificial intelligence, the agency said, adding that such a report would not fall under HAKOM’s jurisdiction anyway; given that it implies identity theft, it should instead be sent to the police and/or the Personal Data Protection Agency (AZOP).
We additionally asked HAKOM whether the adoption of the Act on the Implementation of the Digital Services Act in Croatia will change anything in the regulation of fake AI-generated content.
They point out that the Digital Services Act (DSA) enables the rapid removal of illegal content on the internet, without distinguishing between illegal content posted by the service user and that generated by artificial intelligence. They stress, however, that “the DSA does not address the issue of detecting and punishing perpetrators, but deals exclusively with the issue of whether the service provider complies with the provisions of the DSA, which in this case would include, inter alia, whether it complies with orders to remove such content issued by competent judicial or administrative authorities”.
“With the adoption of the DSA Implementation Act, certain national authorities will be able to issue orders for the removal of illegal content to any provider of intermediary services in the EU. According to the draft proposal of the Act, these authorities, depending on the type of illegal content in question, are DORH, MUP, AZOP, DIRH, the Customs Administration and the Ministry of Health, and the final text will likely also include AEM,” the Croatian Regulatory Authority for Network Industries (HAKOM) explained in response to Faktograf’s inquiry.
British police warn that artificial intelligence is being used for much more dangerous purposes
While institutions in Croatia, judging by the responses Faktograf received, seem relatively unconcerned about the use of artificial intelligence for dangerous offenses and crimes, in the United Kingdom, for example, the danger is taken far more seriously.
Although in Croatia artificial intelligence is still used in relatively less damaging scams, mainly financial fraud, in other countries in Europe and around the world it is being put to far more dangerous uses. Pedophiles, fraudsters, hackers and criminals of all kinds are increasingly exploiting artificial intelligence (AI) to target victims in new and harmful ways, the UK’s senior police lead on artificial intelligence warned the Guardian in November last year.
Alex Murray, the UK’s national police chief for artificial intelligence, said the use of the technology was growing rapidly due to its increasing availability and that police needed to “act quickly” to detect threats in a timely manner. “We know throughout the history of policing that criminals are inventive and will use anything they can to commit crime. Now they are certainly using artificial intelligence to commit crime. It can happen at an international and serious level of organised crime, and it can happen in someone’s bedroom… You can think of any type of crime and put it through the lens of AI and say, ‘What’s the opportunity here?’” Murray said.
Murray points out that pedophiles make the heaviest use of AI, creating photos and videos depicting child sexual abuse. “We’re talking about thousands and thousands and thousands of photos,” Murray said. “All of those photos, synthetic or otherwise, are illegal, and people are using generative AI to create photos of children doing the most horrific things.”
One example dates from August 2024, when 27-year-old Hugh Nelson from Bolton was sentenced to 18 years in prison after offering a paid service to online pedophile networks in which he used AI to generate commissioned images of children being abused.
In February, Italian police traced almost a million euros raised in an AI scam
A very recent story from Italy proves that the victims of AI scams are not only “ordinary” people, and that the technology can be a highly persuasive tool. Italian police traced and froze almost one million euros ($1.04 million) that a leading businessman had transferred to a foreign bank account after falling victim to a fraud in which artificial intelligence was used.
Scammers used AI to mimic the voice of Italian Defense Minister Guido Crosetto, making calls in which they claimed to be seeking urgent financial help to secure the release of Italian journalists kidnapped in the Middle East.
Some of Italy’s most prominent business figures, including fashion designer Giorgio Armani and Prada co-founder Patrizio Bertelli, were targeted, prosecutors in Milan said. However, only Massimo Moratti, the former owner of Inter Milan football club, is believed to have sent the requested funds.
AI is also used for sexual extortion
The same technology is also used for sexual extortion, a type of online blackmail in which criminals threaten to publish compromising photos of victims unless they pay money or meet their demands.
Scammers previously relied on photos that victims had shared themselves, often with ex-partners or with abusers who used fake identities to gain their trust, but artificial intelligence can now be used to “undress” and manipulate photos already posted on social media.
Murray points out that hackers also use artificial intelligence to look for weaknesses in targeted code or software and to provide “areas of focus” for cyberattacks. “Most AI crime is currently related to child abuse photos and scams, but there are many potential threats,” he added.
The AI police lead expects that, as the technology advances and ever more capable text and image tools reach the market and come into widespread use, criminals of all kinds will increasingly use artificial intelligence for malicious purposes.
“Sometimes you can tell if something is an AI photo, but very soon that will disappear. People using this type of software at the moment still need to have some knowledge to create such content, but these tools will soon become very easy to use. Ease of access, realism and accessibility are three vectors that are likely to increase… We, as police officers, need to move quickly in this space to stay on top. I think it is a reasonable assumption that between now and 2029 we will see a significant increase in all these types of crime and we want to prevent that,” concludes the head of the British police department for AI.
Artificial intelligence is also a threat to the financial system
Fraudsters are increasingly using generative artificial intelligence tools in sophisticated ways to defraud financial institutions, the Wall Street regulator FINRA warned in a report covered by The Wall Street Journal.
For example, they create synthetic identification documents to help open new fake brokerage accounts or take over existing client accounts, according to the report by FINRA, the securities industry’s self-regulatory body. In some cases, fraudsters also use deepfake technology to create AI-generated photos of nonexistent people to back fabricated identities, FINRA points out.
The explosive growth in the use of artificial intelligence is a double-edged sword for financial companies. On the one hand, the technology allows them to increase the speed and scale at which they handle intensive compliance tasks, such as conducting customer background checks or monitoring transactions for suspicious activity.
On the other hand, AI can amplify fraud, for example through voice deepfakes. The Wall Street regulator says it has come across examples of deepfake audio and video that scammers use to impersonate famous financial gurus, and that AI is being used to craft phishing emails tailored to individual targets.
US law enforcement agencies, including the Federal Bureau of Investigation (FBI), have warned of the potential for AI to be used for criminal purposes. Driven by the impact of artificial intelligence, fraud losses in the US financial services industry could reach $40 billion by 2027, up from $12.3 billion in 2023, the Deloitte Center for Financial Services predicted in a 2024 report.
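As a rough sanity check of those figures (our own arithmetic, not Deloitte’s), the projection implies fraud losses compounding at roughly a third per year:

```python
# Back-of-the-envelope: implied compound annual growth rate (CAGR)
# from $12.3 billion in 2023 to a projected $40 billion in 2027.
start, end, years = 12.3, 40.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # about 34% per year
```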
How AI is used in the fight against fraudsters
Although artificial intelligence is misused for fraud purposes, there are also numerous examples of companies using it in the fight against fraudsters. For example, UK mobile phone service provider O2 introduced “Daisy”, an AI-powered virtual assistant designed to waste fraudsters’ time and prevent them from targeting real victims.
Daisy, modeled as a chatty, elderly “grandmother,” engages callers in lengthy conversations about her cat and other personal topics, keeping scammers on the line for up to 40 minutes per call, the company said. The program, developed with the help of scam-baiting software engineer and YouTuber Jim Browning, relies on number seeding to make sure Daisy’s number ends up on scammers’ calling lists.
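O2 has not published how Daisy works internally; purely as an illustration of the time-wasting idea, here is a minimal, hypothetical sketch of a stalling bot. All names and replies below are our own invention, not O2’s code:

```python
import random
import time

# Hypothetical stalling bot in the spirit of O2's "Daisy" (not its actual
# implementation): the only goal is to keep a scam caller talking.
RAMBLES = [
    "Oh, hold on dear, the kettle is whistling...",
    "My cat just knocked something off the shelf, one moment...",
    "Could you say that again? These hearing aids have a mind of their own.",
    "That reminds me of my grandson, he is ever so good with computers...",
]

def stalling_reply(caller_line: str) -> str:
    """Return a rambling non-answer; never reveal data, never hang up."""
    time.sleep(random.uniform(2, 5))  # hesitate, burning the caller's time
    return random.choice(RAMBLES)

if __name__ == "__main__":
    # Simulated scam call: every scripted scammer line gets a stalling reply.
    for line in ["Your account has been compromised!", "I need your PIN now."]:
        print("Scammer:", line)
        print("Daisy:  ", stalling_reply(line))
```

A real deployment would sit behind a telephony interface and a speech model; the point is simply that every minute spent talking to Daisy is a minute not spent on a potential victim.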
“We are committed to playing our part in stopping fraudsters, investing in everything from firewall technology to blocking fraudulent messages and detecting unwanted calls using artificial intelligence to protect our customers,” said Murray Mackenzie, the company’s fraud director.
On the roads of Devon and Cornwall in the UK, artificial intelligence is being tested to detect drunk or drugged drivers, the BBC reports. The Heads-Up camera system, developed by Acusensus, can identify behavior consistent with alcohol or drug consumption while drivers are on the road, according to the force. Once a suspicious driver is detected, police stationed nearby can stop the vehicle for further checks.
The most common types of AI scams
Norton, the American brand widely known for its computer security products, provides an overview of the five most common types of AI scams in an article from November 2024:
AI chatbot scams: These are fake online conversations in which an AI poses as a person, such as a customer support representative, and typically asks the user for personal information. AI chatbots often present themselves as trustworthy, for example as technical support on a well-known website. They can also lure you in by claiming you have won a prize, offering investment advice, or posing as a match on a dating site.
AI deepfake scams: These involve a fake video of a real person, usually a celebrity, created by a scammer who has trained an AI tool on real videos and vocal recordings of that person. Once the AI has enough data, it can generate videos of the person in almost any situation. Deepfake videos usually feature celebrities or politicians because there is a lot of footage of them for the AI to learn from. A hacker might send you a video of a celebrity you admire asking you to donate to a cause. The link in the video could lead you to a malicious website. Or, a politician you trust might announce a generous tax refund, directing you to a tax scam website that asks for your personal identification number (PIN).
AI investment scams: In this type of scam, fraudsters convince people to part with their money by promising huge returns or by encouraging them to sign up for illegitimate cryptocurrency or stock trading platforms. As in deepfake scams, the scammer can use artificial intelligence to impersonate a famous person and ask you to invest your money. Once you send it, the scammers pocket it and disappear.
AI phishing scams: In these, scammers use AI to manipulate you into giving up sensitive information. In a phishing scam, the attacker typically pretends to be someone trustworthy, and scammers now use AI tools to run these scams at massive scale. Phishing emails have increased by more than 1,200% since the end of 2022, largely thanks to generative AI tools, which can produce thousands of different personalized phishing messages in seconds (a toy illustration of the defensive side is sketched after this list).
AI voice cloning scams: These use AI technology to mimic a real person’s voice. Scammers then send voice messages on social media or make phone calls pretending to be the target’s loved one or a celebrity. AI voice scams are like audio versions of deepfakes. Simple versions are pre-recorded messages sent through phone apps or social media. For example, you might receive a fraudulent call from a politician asking you to donate to their campaign via a malicious website.
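Norton’s list describes the attacks; to illustrate the defensive side mentioned under phishing, here is a deliberately crude, toy keyword scorer of the kind a naive mail filter might use. It is our own sketch, not Norton’s product, and AI-personalized phishing is precisely the kind of message that slips past such simple rules, which is why real filters rely on machine learning, sender reputation, and URL analysis:

```python
import re

# Toy phishing heuristic (illustrative only): count classic pressure and
# credential-request patterns in a message. Not a production filter.
SUSPICIOUS_PATTERNS = [
    r"\burgent(ly)?\b",
    r"\bverify (your )?(account|identity)\b",
    r"\bpassword\b|\bpin\b",
    r"\bclick (the )?link\b",
    r"\bact (now|immediately)\b",
]

def phishing_score(message: str) -> int:
    """Return the number of suspicious patterns found in the message."""
    text = message.lower()
    return sum(bool(re.search(p, text)) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    sample = "URGENT: verify your account now, click the link and enter your PIN."
    hits = phishing_score(sample)
    print(f"score={hits}:", "flag for review" if hits >= 2 else "pass")
```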
How to protect yourself from AI scams?
In an article from November last year, Forbes offers seven tips that can help in combating AI-enabled fraud:
– Be wary of unexpected messages, even if they appear to be from someone you trust. Take the extra step to verify their legitimacy through official methods: call them directly, send a personal message, or verify face-to-face if possible. Carefully examine the language and tone of messages for any inconsistencies or signs of AI-generated content.
– Enable multi-factor authentication (MFA) on your accounts and devices to strengthen your security rather than relying solely on passwords (a minimal sketch of how such one-time codes are generated follows this list).
– Be wary of urgent requests; pressure to act immediately is a classic scam tactic.
– Be wary of overly personalized messages, as AI can use your online activity to create very convincing traps. Remember, the goal of every scammer is to gain your trust – make it your goal to question everything and remain skeptical.
– Stay up to date with the latest tactics scammers are using AI for.
– Use strong security measures such as email filtering, web content filtering, and antivirus protection to identify and block potential threats.
– Report all fraud attempts to relevant authorities and organizations to help identify and address new threats.
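As a footnote to the MFA tip above: most authenticator apps implement time-based one-time passwords (TOTP, RFC 6238). The minimal sketch below shows how a six-digit code is derived from a shared secret; the secret shown is a demo value, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step  # 30-second time window
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Demo secret; in practice it comes from the service's enrollment QR code.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Even if a scammer phishes your password, they still need the current code, which changes every 30 seconds; that is the protection the tip above buys you.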
Although the capabilities of artificial intelligence in creating fraud grow more sophisticated by the day, the Croatian institutions responsible for tackling fraud in all its forms do not appear to be doing much about AI for now. Police forces in more developed countries have officers dedicated specifically to fraud involving artificial intelligence, yet the Croatian police still do not record how many frauds involved the misuse of personal data – colloquially, identity theft. And although we asked the State Attorney’s Office for more detail about fraud involving artificial intelligence, we received only the terse reply that a similar case is under investigation, with no further details.