Original article was published on 11/23/2025; Author: Ana Sorić
Taylor Swift supposedly supporting Trump, Biden urging people not to go to the polls, and a fake Kamala Harris voice – these are just some examples of AI-generated content from the U.S. presidential election.
In the U.S. presidential election, Republican Party candidate Donald Trump won a second term as the head of the most powerful country in the world. After a fiercely contested campaign that deeply divided U.S. citizens, Trump unexpectedly but convincingly defeated Democratic candidate Kamala Harris.
As in previous U.S. election cycles, where voter support relied heavily on how candidates presented themselves online, particularly on large social networks, this year’s campaign saw a massive spread of disinformation through these channels. Technological advances, above all the ease with which AI-based tools can generate fake photographs and recordings, have also opened a new front in attempts to influence voters.
Given that the full potential of AI is not yet understood, its implications have raised significant concerns. The impact on elections is especially scrutinized. While there is hope that AI will streamline various processes, previous research on its effects is both promising and alarming.
Rumours about presidential candidates, misleading accusations and outright lies flooded the Internet ahead of the U.S. presidential election. Hundreds of pieces of misinformation about voting irregularities spread across private profiles, among supporters of both Republicans and Democrats, the BBC wrote ahead of the elections. The BBC identified users of the social network X (formerly Twitter) who spent days sharing electoral misinformation, AI-generated photos and videos, and unconfirmed conspiracy theories, all to earn thousands of dollars. It found networks of dozens of profiles that shared each other’s content to increase their reach and thus their earnings, coordinating posts through forums and group chats.
Many of these posts gained millions of views. Some contained classic misinformation and false or unconfirmed narratives, while others were created with AI-powered tools. The resulting images and photographs give the disinformation a veneer of authenticity that earlier forms lacked, making it more convincing to voters.
Fake photo of Kamala Harris from McDonald’s
Below is a summary of the most widely spread confirmed disinformation during the U.S. presidential campaign.
During the campaign for the U.S. presidential election, Kamala Harris told voters, among other things, that she had worked at McDonald’s as a young woman. On 26 October this year, a photo of a young Harris in a fast-food chain uniform appeared on the Meta-owned Threads platform, but it soon turned out that the photo had been edited. The original shows Suzanne Bernier, who died of breast cancer in 2007 and whose family had published the photo from her youth. A Trump supporter placed Kamala Harris’ face onto the photo and shared it on Threads; although his first post noted that the photo was fake, it spread without that context.
On X, the Brown Eyed Susan profile, whose posts favour Kamala Harris, also drew many views. One of her most viral posts, with about three million views, spread the false narrative and conspiracy theory that Donald Trump had staged the attack on himself.
Taylor Swift’s Alleged Endorsement of Donald Trump
One of the most viral AI-generated posts claimed that the famous musician Taylor Swift, as well as her followers (the so-called Swifties), supported Donald Trump for president. Taylor Swift herself then denied the claim and publicly endorsed Kamala Harris.
Fake phone calls from Joe Biden
Back in January 2024, thousands of American voters in New Hampshire received a voice call purportedly from then U.S. President Joe Biden urging them not to go to the polls. The call turned out to have been generated by artificial intelligence that mimicked Biden’s voice. The investigation led to political adviser Steve Kramer admitting to using a deepfake for the fake voice recording, for which he faces a $6 million fine.
In July, Elon Musk, owner of the platform X, shared a video with an artificially generated Kamala Harris voice saying things she never said.
Although logic dictates that fraudsters will not film themselves committing fraud, one such video, claiming that election fraud was taking place, still managed to spread. The video, released on 24 and 25 October, shows a purported election employee in Bucks County, Pennsylvania, opening envelopes with ballots and destroying those with votes for Trump. Election officials immediately denied that this had happened, and the design and material of the ballots also showed that the video was fake.
Fake voice recording of Kamala Harris
In early October, a video falsely attributed to Kamala Harris, purportedly advocating for women’s reproductive rights, went viral. In the video, a boy dressed as a girl makes a mess in the house and behaves inappropriately. In the background, a voice closely resembling Kamala Harris’ says: “We must protect a woman’s right to choose her own path for herself, her family, her future. So that we don’t get stuck in a life with a child like this.” The description of the video posted on X reads: “Kamala Harris’ new advertising campaign depicts a mother regretting not having had an abortion. Utterly disgusting. How can anyone vote for this?” The video turned out to be a 2021 home insurance advertisement by the British company John Lewis & Partners, onto which the candidate’s voice was inserted using an AI content-generation tool.
However, there were also cases in which candidates accused others of using AI tools when this was not the case. Donald Trump, otherwise inclined to exaggerate and make baseless accusations against opponents, posted on August 11 that presidential candidate Kamala Harris had used artificial intelligence to add a crowd to a photo of her election rally in Romulus, Michigan. “Did any of you notice that Kamala cheated at the airport? There was no one around the plane and she added a bunch using artificial intelligence to show off her so-called followers, but they didn’t exist”, Trump wrote on Truth Social, the network he owns. “No one was waiting for her on the runway, and it looked like 10,000 people were there. Here, look, we caught her with a fake crowd. There was no one there”, he added. However, numerous videos, as well as photos from major agencies such as Getty and The Associated Press, show that a large crowd was waiting for her at the airport and at the venue. Kamala Harris’ team estimated that about 15,000 people had gathered.
Throughout the spring and summer, PolitiFact, a non-profit fact-checking organization affiliated with the Poynter Institute, uncovered numerous profiles and posts that used AI-generated voice messages to spread false narratives about both presidential candidates. The videos were most often disseminated through TikTok and YouTube, but they were also shared on Facebook and X, where they drew fewer reactions. In one example, a voice told voters that Donald Trump had withdrawn from the presidential race, while other videos spread fake news about the health of high-profile politicians.
Trump spread misinformation about foreigners during the debate
Another false narrative spread through social media concerned misinformation about foreigners in the U.S., a topic pushed into the campaign by Donald Trump’s promise to carry out mass deportations of illegal migrants. Completely unfounded rumours spread that migrants were eating pets in the city of Springfield, Ohio. In September, Senator JD Vance, now the Vice President-elect of the United States, continued to spread the same narrative, and Donald Trump repeated it on 10 September in a presidential debate in front of 67 million viewers. Despite the lack of evidence, AI-generated images of Trump as a pet protector went viral, leading to a wave of cat-themed memes.
Immediately after the 6 November election, a video of President-elect Donald Trump promising the former Pakistani prime minister to help him regain power spread via Facebook, and then TikTok. “Greetings to my Pakistani-American friends. I promise, if I win, I will do everything to help Imran Khan get out of prison as soon as possible. He’s my friend, I love him. I will support him to take back power”, Trump reportedly said. There is no evidence or record that Trump ever said this; a reverse image and video search revealed that the footage was actually from a 2017 interview Trump gave to NBC News, with the audio subsequently added using artificial intelligence.
Back in July, Donald Trump Jr., the son of the U.S. President-elect, claimed on X that the major tech companies Google and Meta were interfering in the elections by manipulating and hiding search results about Trump. Elon Musk, meanwhile, accused Google of withholding results and hiding “real news” about the 13 July shooting of Trump in Butler, Pennsylvania. Both Google and Meta denied these allegations, stating that they had intervened and corrected some errors in AI tools, but pointed out that the errors stemmed from automatic search rather than political bias. After the U.S. presidential election, Faktograf also analysed the claim that a video proved Google’s search engine had hidden polling station locations from Trump voters, which turned out to be incorrect.
Since the U.S. elections were closely monitored worldwide, disinformation also spread globally, including on the social media profiles of Croatian users. Among other disinformation about the U.S. elections we dealt with, we detected an AI-generated recording of Ukrainian President Volodymyr Zelensky allegedly crying over Trump’s victory. The footage on which the recording was based had nothing to do with the U.S. presidential election: it was filmed in early April 2022 in the city of Bucha, one of the first war crime scenes in Ukraine.
Three ways AI can influence elections
Grinnell College political science professor Barbara Trish wrote an analysis for The Conversation. She notes that although artificial intelligence, as an innovative technology, has great potential to manipulate and spread lies, most of the uses of AI seen in this U.S. election were already familiar. Trish identified several ways AI can influence elections:
1. Voter information
With the advent of artificial intelligence, some voters will turn for information not to search engines such as Google or Bing but to chatbots – interactive systems that mimic natural language. However, some chatbots refuse to answer political questions, some answer correctly, and some provide false information. Experts advise verifying chatbots’ responses.
2. Deepfake
A deepfake is a photo, audio or video clip edited using AI to mimic reality – in effect, a far more convincing version of the manipulated images, video and audio long possible with tools such as Photoshop and video editing software.
Wired magazine’s project monitoring the influence of artificial intelligence on the 2024 elections showed that deepfakes did not take hold, but they were still used by candidates from both political camps for various purposes, including misleading voters.
3. Influence of foreign countries on elections
Russia’s confirmed involvement in the 2016 elections underlined the importance of monitoring foreign interference in U.S. politics, whether by Russia or any other country. In July, the U.S. Department of Justice seized two domains and examined about a thousand profiles used by Russian actors as a so-called “bot farm”; artificial intelligence is expected to make such operations even easier. Investigators also found Chinese users spreading misinformation about the U.S.; in one post, a Joe Biden speech was edited so that it appeared he was making sexual references.
Barbara Trish points out that in political campaigns, AI tools can also be used to probe online reactions to see how a candidate is perceived in particular social or economic circles, to research opinions, or to tailor posts to a specific target group.
Citizens’ concerns about AI
Citizens’ concern about the impact of artificial intelligence on the campaign is evident from a survey by the non-profit U.S. think tank Pew Research Center published in September this year. According to the poll, more than half of Americans, regardless of political leaning, were concerned about the impact of AI on the 2024 presidential election. They also did not trust the big tech companies (Facebook, X, TikTok and Google) and their promises to prevent misuse of their platforms for spreading election disinformation – yet they believed that responsibility for solving the problem lies with those same platforms.
Almost simultaneously, the Alan Turing Institute published research showing no evidence that artificial intelligence and disinformation had influenced the European elections held before the U.S. ones. Researchers identified only 16 confirmed viral posts containing AI-generated disinformation or deepfakes during the UK general election, and only 11 viral posts related to the EU and French elections.
However, researchers expressed concerns about realistic parodies and satirical deepfake videos that, while intended to be humorous, could easily be mistaken for factual content. In today’s world, this poses a major challenge to platforms and regulators who must carefully balance the fight against disinformation while defending freedom of speech and recognizing the benefits of satire in political discourse. Research by the Alan Turing Institute also found evidence that voters mixed legitimate political content with AI-generated material, which could adversely affect public trust in online information in general, not just in the context of elections.
A group of scientists from Purdue University has compiled a database of social media posts related to artificial intelligence. They found that most videos were created not to mislead users but to entertain; many are satirical. Christina Walker, a PhD student at Purdue University, pointed out that by the time an artificially generated video has been shared for the tenth time, its context is easily lost, and the post is no longer necessarily read as satire – users may take it as a real event.
AI tools for political campaigns
A number of startups have also appeared offering AI tools for political campaigns. These include, for example, Battle Ground AI, which can write original text for a political ad in just a minute, and Grow Progress, a tool that helps simplify persuasion tactics and convey messages to voters. The latter company’s co-founder, Josh Berezin, told Time that dozens of political advisers tried out their advertising tool this year, but uptake was low. The New York Times reported in August that only a handful of candidates used AI tools, and most tried to hide it from the public.
AI tools accelerate the “production” of false information by using language models that make content sound as human as possible. In addition, realistic images, videos and audio recordings can be created with AI tools in a very short time, without much prior knowledge or expertise.
In addition to warning about the negative consequences and the impact of AI on the elections, experts also point out the positive sides. Artificial intelligence can serve as a valuable tool in elections, assisting with tasks like automatic signature verification, message creation, and quicker collection of data or funds. However, deepfakes will become more compelling over time, so new ideas for using AI will be needed.
Artificially generated profiles
Elise Thomas, an Australian political science and linguistics researcher and analyst, published the results of her own research into profiles on the social network X. The results showed that a number of profiles on X began publishing AI-generated content this year to help Donald Trump.
She found that the profiles publishing such content are partly run by real people and partly by artificial intelligence, Deutsche Welle reports. Even the “blue check mark” (verification badge) proved no guarantee against spam profiles, and some of the profiles had been created only in June 2024. She also revealed how she determined that some profiles were artificial intelligence: she “talked” to some; some revealed themselves by replying that “they are an AI model, which is why they cannot express political views”; some were confused and could not answer questions; and some contradicted themselves, writing in one post that they supported Trump and in another that they disagreed with his policies.
On 30 September, scientists from Clemson University published a study revealing an army of artificially generated profiles spreading political propaganda while posing as real people. These include at least 686 profiles on which artificial intelligence uses language models to answer questions from real people’s profiles; Republican and Democratic candidates were targeted equally.
The U.S. Department of Homeland Security’s annual threat assessment report, released in September, detected increased activity by Russia, Iran and China attempting to influence the U.S. election, including the spread of fake or manipulative news. It found that Russian “influence actors” were massively sharing unconfirmed stories about migrants in America to stoke social strife. Much of this content is generated by artificial intelligence, and investigators also discovered entire AI-created websites posing as legitimate news portals.
2024 without significant AI impact
Information experts nevertheless argue that 2024 was not a watershed year for the use of artificial intelligence in politics. “Numerous campaigns and organizations use AI in some way. In my opinion, it didn’t reach the level of impact that people feared”, said IT expert Betsy Hoover.
At the same time, experts warn that the full impact of AI on this election cycle remains unclear, especially as its use on private profiles continues to grow. They also point out that while AI’s impact on these elections seems weaker than expected, it should not be overlooked that this can be amplified significantly in the next elections due to advances in technology and the increasing use of AI by the general public and among politicians. They believe that AI models will improve significantly in the next year or two and that they will play a much larger role in the U.S. elections in 2026 or 2028.
More reassuringly, research published back in January 2023 in the journal Nature shows that tailored political persuasion has no effect, because voters are mostly averse to political messages tailored to them. The study followed about two million largely undecided or poorly informed voters who were served political advertising campaigns through social networks over eight months; it turned out that the advertisements had no impact on voter turnout or preferences.
What will Trump’s policies be?
In this campaign, neither Kamala Harris nor Donald Trump paid much attention to the topic of artificial intelligence. In fact, the presidential candidates first mentioned artificial intelligence (AI) in the official campaign only during the 10 September debate – yet AI has already become such a part of our lives that no one even flinched, writes Scientific American.
The candidates positioned themselves differently on AI. Kamala Harris spoke more openly about it, outlining specific policy steps to protect vulnerable groups from potential harm. In an interview with Fox earlier this year, Donald Trump expressed mild resignation, calling AI “perhaps the greatest danger” because there is no “right solution” for it.
The election of the U.S. president will certainly affect the creation of policies on artificial intelligence in the next four years. President-elect Donald Trump mentioned a reduction in the number of laws, and his priority in the context of artificial intelligence is military application and national security.
Had Kamala Harris been elected U.S. president, she would have continued Biden’s AI policies. In October last year, the Biden administration issued guidelines for the safe management of artificial intelligence, while its draft Law on Artificial Intelligence emphasizes that AI models must be safe and effective, that algorithms must not discriminate, that using AI must remain optional, that user data must be secure, and that users must be warned when they are talking to an artificial intelligence. How the new administration will respond to the challenges of ever-expanding AI remains to be seen.