The original article (in Croatian) was published on 19 September 2025; author: Petar Vidov
Croatia has started taking steps towards regulating the development and use of AI technology and punishing its misuse. However, key decisions about what humanity’s future coexistence with AI will look like will be made elsewhere.
“If anyone builds it, everyone dies.”
That is the title of a new book by two AI researchers from California’s Machine Intelligence Research Institute. Authors Eliezer Yudkowsky and Nate Soares argue that it is not possible to build superintelligent AI that will be under human control.
Their arguments are carefully developed. They point out that, even though we cannot predict the exact direction or pace of AI development, we can expect that the outcome will not be an improvement in humanity’s well-being.
Yudkowsky, who has studied the development of AI technology for more than two decades, identifies in the book what he sees as the greatest failure of the experts working on it: they have not understood how intelligence actually works, how it arises and how it develops. State-of-the-art AI models are extremely competent at solving complex tasks, but their creators do not understand exactly how these models work and therefore cannot predict how they will behave.
There have already been cases of large language models trying to deceive and blackmail their creators in order to achieve goals of their own [1, 2, 3, 4, 5]. As AI technology advances, Yudkowsky and Soares argue, it will become increasingly difficult to control the most advanced models. We will have no understanding of what AI is doing, or why.
Why should we fear AI?
Intelligence, Yudkowsky and Soares point out, is the superpower that made humans the dominant species on Earth. Can we really expect to remain the dominant species if an even more intelligent life form emerges?
In their book, the authors put forward the thesis that AI has an evolutionary advantage over humans. If we really do develop a superintelligent machine, it will be able to reproduce without limit, copying itself and thereby creating new AIs that help it achieve its goals. People already struggle to understand how the AI models they built themselves work; how will they understand models created without any human input?
Of course, AI models do not necessarily have to be maliciously inclined towards people. Yudkowsky and Soares draw a comparison with the relationship between humans and other animal species: most people have nothing against wild animals and do not wish them harm or try to exterminate them deliberately, yet they still destroy their habitats while pursuing their own goals, most often the acquisition of material resources. By the same logic, according to Yudkowsky and Soares, we should not assume that a superintelligent machine will be especially considerate of people while pursuing its goals.
The only way to avoid the AI apocalypse, Yudkowsky and Soares are convinced, is to stop the development of the technology before it’s too late. Many other AI experts share a similar view. Retired scientist Geoffrey Hinton, who earned the nickname “the godfather of AI” due to his contribution to the development of the technology, also publicly warns of the risks that come with the accelerated development of this technology and advises that we should hit the brakes.
It is not just researchers and scientists who say this. These are the words of Sam Altman, CEO of OpenAI, the organization that created ChatGPT:
“I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning.”
Warnings about the existential threat have also come from the heads of Anthropic, the company that developed Claude, and of Google’s AI laboratory DeepMind. Elon Musk, owner of xAI, has repeatedly said he fears AI could wipe the human race off the face of the Earth.
There are, of course, AI optimists: experts who argue that such existential fears are overblown and unfounded. Both sides agree that this is the most powerful technology humans have ever developed, but the optimistic camp is confident that AI will usher humankind into an era of well-being. No one can say for certain which side is right. Because a significant share of the world’s top AI experts express fears of an AI apocalypse, however, their warnings are being taken seriously by the international public.
Croatia and the AI revolution
Some countries are therefore developing regulations meant to ensure that safety is not neglected in the development and application of AI. The issue is also being addressed by the European Union, whose AI Act, criticized by many experts as insufficiently ambitious, should soon begin to apply. We explored what Croatian institutions are doing to prepare for the AI revolution.
So, what will the Republic of Croatia do about artificial intelligence? At least a partial answer to this question should be offered by the National Plan for the Development of Artificial Intelligence for the period until 2032, with the associated Action Plan for the period from 2026 to 2028.
A working group convened by the Ministry of Justice, Administration and Digital Transformation, led by Ivan Lakoš, Director-General of the Directorate for Digital Society Development and Strategic Planning at Damir Habijan’s ministry, is currently drafting these strategic documents.
Both documents should be completed by the end of the year. According to Habijan’s ministry, the National Plan should be ready by the end of October, after which it will enter into public consultation. The Action Plan should be developed by the end of December 2025.
According to the ministry, the working group has held several meetings, and workshops and consultations on various topics have been organised. Among the issues the group is considering are possible uses of AI in Croatia, with particular emphasis on its potential to improve public administration, the judiciary, healthcare, education and the economy, and on creating a stimulating regulatory and investment framework for the development of AI tools in Croatia.
“We believe this last domain is extremely important, which is why it is essential to strike a balance between innovation, research and development of artificial intelligence on the one hand and the security of citizens and their data on the other,” said Minister Habijan when announcing the work on these strategic documents.
At the same time, the Government of the Republic of Croatia has submitted to parliament a proposal to amend the Criminal Code that introduces a new criminal offense: endangering life and property through the use of an artificial intelligence system. Anyone who uses an AI system to commit murder, inflict serious bodily injury or cause serious property damage will face years of imprisonment.
In other words, things are moving, but not very fast. For now, the Croatian authorities are focused on creating a stimulating environment for the AI economy. Potential dangers and abuses are addressed mainly through provisions for punishing them after the fact and through attempts to protect citizens’ privacy.
The issue of privacy
The latter could prove to be quite a challenging task. The president of the Signal Foundation, which created the popular app of the same name for secure, encrypted communication, recently warned in a column for The Economist that the era of AI agents could mark the end of privacy in the digital space.
Tools such as ChatGPT or Google Gemini are being developed to perform various digital services for their users, such as booking travel, buying food or any other task that can be completed online. To do this, they will need access to a range of users’ personal data, such as bank account information.
“Although we have not yet reached a fully ‘agent’ future, the harmful consequences are already very visible. Researchers have shown that AI agents can be led to disclose sensitive data they have access to, or can be tricked by hackers into taking harmful action – from extracting confidential code to causing chaos in homes by activating smart home devices,” writes Signal President Meredith Whittaker.
She warns that these agents are increasingly being integrated into the operating systems of computers and smartphones, giving them access to various user data, including data that should be encrypted. “Roughly speaking, the path we are currently taking towards ‘agent’ AI is leading to the abolition of privacy and security at the application level,” Whittaker believes.
Who will be making the decisions?
When the Global Index on Responsible AI was published last year, Croatia was among the worst-ranked countries in Europe. Behind it come Serbia, North Macedonia, Georgia, Montenegro, Moldova, Albania, Kosovo and Belarus. The best-ranked are the more developed European countries: the Netherlands, Germany, Ireland and the United Kingdom. The United States is in fifth place. On the overall list, Croatia ranks 49th.
The index measures the extent to which countries contribute to the responsible development of AI, especially with respect to the protection of human rights. One Croatian initiative was nevertheless singled out as a “bright spot”: amendments to the Labour Act that require human oversight of automated management systems in order to protect workers’ safety and health. The same amendments also spelled out additional employee rights concerning the privacy of their personal data.
The additional steps the Croatian authorities are taking to regulate the use of new digital technologies will no doubt be enough to improve Croatia’s rating in future editions of this index. However, the key decisions on AI technology and its (safe?) use will be made outside Croatia, and probably outside Europe. The development of state-of-the-art AI models is happening predominantly elsewhere, above all in the United States and China. In the coming years, the leaders of those countries will make decisions that will steer the development of AI and thus significantly shape the future that awaits us all.
Destruction or prosperity, or something in between: that is the question.