AI and Workers: Artificially Managed, Actually Exploited

Michael Dziedzic/Unsplash

Original article (in Croatian) was published on 05/02/2025; Author: Anja Vladisavljević

In the age of artificial intelligence, the enormous burden of technological progress is borne by workers, who are increasingly insecure and subject to control.

“AI-driven management is already intensifying pressure on 427 million workers worldwide,” warned the International Trade Union Confederation (ITUC), which called for urgent measures to protect workers’ lives and rights in the age of digitalization and artificial intelligence (AI) on the occasion of International Workers’ Memorial Day, 28 April.

On that April day, the confederation remembered workers who have died and pledged to keep fighting for the living. “Technology should work for us, not against us,” they wrote.

One of the most brutal examples of technology turning against workers is the case of Jasper Dalman, a Filipino courier for the Foodpanda platform who died in a traffic accident in 2023 while on the job. His death, according to the ITUC and the Philippine trade union of which Dalman was a member, showed “the deadly consequences of algorithmic exploitation that set impossible productivity targets”.

The tragic case from the Philippines is not isolated. The application of new technologies without proper consultation with workers and their trade unions, the ITUC points out, is already causing serious violations of workers’ rights. Algorithmic management and the use of AI expose workers to precarious conditions, undermine workplace safety, erode hard-won labour rights and discriminate against workers.

While the May Day bean stew, traditionally served at gatherings in squares and parks, is still being digested, it may be a good time to ask ourselves what it means to be a worker in the era of digitalization.

Algorithm the boss

As the ITUC points out in its new report (“Artificial intelligence and digitalization: A matter of life and death for workers”), the development of technology has created a multitude of jobs and ways of working that did not exist before: app-directed couriers and drivers, content moderators, big data analysts and machine learning engineers.

While traditional industries and trades have a boss, a manager, a supervisor, a chain of command and a union overseeing working conditions, in these newer occupations those functions are replaced (or abolished altogether) by applications and digital tools.

Platform work is the most obvious example of how technology can be (mis)used against workers. Under the guise of flexibility and simplicity, digital platforms – applications for transport, delivery and other services – use algorithmic management to control workers while offering them only a fraction of the protection found in traditional employment.

Algorithmic management involves continuously monitoring workers’ behaviour through devices (e.g. mobile phones) and applications, and continuously assessing their performance based on the collected data (user ratings, task execution speed, number of rejected orders).

Under constant supervision, workers are pressured to maintain or increase their efficiency, even though very often they do not know how the algorithms work. At the same time, they have little room to complain or seek feedback when faced with unfair treatment: the ratings and decisions are automated, so there is no living person to appeal to.
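The opacity described above can be illustrated with a minimal sketch of how such a score might be computed. Everything here – the metric names, the weights, the 20-minute speed benchmark – is invented for illustration; real platforms do not disclose their formulas, which is precisely the problem.

```python
# Hypothetical sketch of an algorithmic-management score combining the
# metrics mentioned above: user ratings, task speed, rejected orders.
# All weights and thresholds are invented for illustration.

def performance_score(avg_rating, avg_task_minutes,
                      rejected_orders, completed_orders):
    """Collapse raw surveillance metrics into one opaque 0-1 score."""
    rating_part = avg_rating / 5.0                        # ratings on a 1-5 scale
    speed_part = min(1.0, 20.0 / max(avg_task_minutes, 1))  # faster than 20 min = max
    total = completed_orders + rejected_orders
    acceptance_part = 1.0 - rejected_orders / max(total, 1)
    # Weighting the worker never sees:
    return 0.5 * rating_part + 0.3 * speed_part + 0.2 * acceptance_part

score = performance_score(avg_rating=4.6, avg_task_minutes=25,
                          rejected_orders=3, completed_orders=47)
# A worker whose score drops below a hidden threshold may simply be
# deprioritised for new tasks, with no explanation and no one to appeal to.
```

The point of the sketch is that each input is a by-product of surveillance, and the worker can see neither the weights nor the threshold that decides their income.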

The range of industries using algorithmic management is expanding. One of the best-known examples is Amazon, where warehouse workers are monitored with special scanners that measure their productivity and record errors. Call-centre workers are likewise machine-supervised, and some such software can even automatically analyse and evaluate what workers are saying, what phrases they use, and whether the mood of a call was “negative” or “positive.”

In the US, nurses are increasingly working through platforms (which connect them to hospitals and healthcare facilities) and are struggling with problems similar to those of Uber drivers. According to a study by the Roosevelt Institute, automated metrics in the world of gig nursing assess their reliability based on the number of shifts they complete, how early they cancel shifts, and whether they stay late at work. Workers are offered different shifts, often at different rates of pay, and are frequently assigned to facilities where they have never worked before.

Algorithm the recruiter

As in the cases above, where worker-surveillance technology is justified on grounds of time and resource efficiency, workers are increasingly being “scanned” even before they reach the workplace. Artificial intelligence is also being integrated into recruitment, displacing the recruiters who would otherwise review résumés, conduct interviews and make judgments based on experience and impression.

AI-driven tools handle the logistics of candidate selection and interview scheduling, eliminating time-consuming administrative work. AI can also analyse vast amounts of workforce data to anticipate a company’s future hiring needs and identify “ideal candidates” based on the company’s profile.

It can review a candidate’s CV, cover letter and work experience, but also their social media profiles, which calls their privacy and data security into question. If the job interview is conducted via video link, which is no longer rare, facial recognition and voice analysis algorithms can evaluate responses, body language, word choice and tone, scoring candidates against a predefined metric.

As analysed by the American Civil Liberties Union (ACLU), many of these tools pose a huge risk of exacerbating existing workplace discrimination based on race, gender, disability, and many other characteristics, despite marketing claims that they are objective and less discriminatory.

AI tools are trained with a large amount of data and make predictions about future outcomes based on correlations and patterns in that data. The tools used by a particular company can be “fed” with data on the employer’s own workforce and previous recruitment processes, so they can automatically eliminate certain social groups.

For example, if the system is trained on data from a company that employed more men than women, the tool could favour male candidates. Such favouritism can also arise in a very banal way, as the case described by the BBC shows.

In one company, an AI résumé screener was trained on the résumés of past employees, giving candidates extra credit if they listed baseball or basketball as hobbies, because those were associated with “more successful staff” – often men. Candidates who mentioned softball – usually women – were marked down. Another example of bias singled out by the BBC is a candidate who passed the selection process only after changing the date of birth on his CV to appear younger (before that change, with an otherwise identical application, he had been rejected).
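The mechanism behind the baseball/softball case can be sketched in a few lines. The “training data” and scoring rule below are invented for illustration, but they show how a harmless-looking frequency statistic over past hires becomes a proxy for gender:

```python
# Minimal sketch of proxy bias in a résumé screener, in the spirit of
# the BBC example above. The hobby list stands in for a history of
# past hires that skewed male; all data here is invented.

past_hires_hobbies = ["baseball", "basketball", "baseball",
                      "basketball", "baseball"]

def hobby_bonus(hobby):
    """Score a hobby by how often it appeared among past hires."""
    # "Looks like our successful staff" is just frequency in biased data.
    return past_hires_hobbies.count(hobby) / len(past_hires_hobbies)

print(hobby_bonus("baseball"))  # high: common among (mostly male) past hires
print(hobby_bonus("softball"))  # zero: absent from the data, so penalised
```

No rule in the code mentions gender, yet the output discriminates by it, because the training data already did. That is why marketing claims of “objectivity” for such tools are so misleading.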

In some cases, the BBC explains, the biased selection criteria are clear – such as age discrimination or sexism – but in others they can be far murkier. As in the aforementioned cases of algorithmic management, the lack of transparency is problematic, especially when candidates are rejected without ever learning why.

Does the worker help the AI, or the AI the worker?

At the same time, the potential positive impacts of artificial intelligence on the world of labour should be taken into account. As various online sources note – and workers’ rights organisations do not overlook this either – AI can automate repetitive tasks, increase efficiency and reduce operating costs, freeing workers from monotonous duties and allowing them to focus on more creative work.

But since we are enumerating pros and cons as if we were at school, we cannot omit the most obvious con. One of the primary concerns about AI is the potential loss of jobs. As AI systems become increasingly capable of performing tasks traditionally performed by humans, there is an understandable fear that some occupations will become obsolete.

According to a poll conducted among American workers in January this year, there is widespread recognition of AI’s impact on job losses: 43 percent personally know someone who has lost their job to artificial intelligence, and 89 percent express concern about losing their own.

In September 2023, the first survey on perceptions of AI in Croatia was conducted. For 60 percent of respondents, AI evokes a sense of uncertainty or concern, while 30 percent find it useful.

In parallel, however, a perspective is emerging that questions the notion of artificial intelligence purely as a job eliminator. In some cases workers keep their jobs but, contrary to the promise, the introduction of artificial intelligence increases their workload.

As we wrote earlier, such burdens are felt by workers in the field of culture. Literary translators, for example, are already receiving queries for the so-called post-edit, i.e. editing translations generated by artificial intelligence. This gives them a lot more work because they have to fix errors caused by machine translation.

In journalism, things are similar. If a journalist looks for answers via ChatGPT, they must go through several additional rounds of verification, because the tool, unless well trained, does not distinguish between reliable and unreliable information and often does not reveal its sources. In principle, it is still difficult to leave AI to perform tasks without human supervision.

After all, the very point of some jobs is the growth and development of artificial intelligence. We do not mean IT professionals, but “data workers” who, in stressful and precarious conditions and for low pay, collect, process, label, moderate or enter data so that artificial intelligence systems can be trained and function.

Among them are social media content moderators, constantly exposed to toxic and disturbing material in order to train artificial intelligence to become better at recognizing such harmful content.

Technology for the workers’ struggle

Finally, let’s return to the ITUC’s message that technology should work for workers, not against them. In addition to traditional actions, strikes and protests – which warn of the erosion of workers’ rights through technology and advocate better legislation – workers and trade unions are considering using AI and digital tools to protect and improve workers’ rights.

As workplace automation poses a significant challenge to recruiting workers into unions – many work remotely, through apps and in short-term jobs – unions must also embrace digital tools to improve their outreach and advocacy.

The United Nations University (UNU) sees technological adaptation as part of the solution. Virtual platforms can connect union representatives with geographically dispersed members, and social media campaigns and targeted digital advertising can raise awareness of union benefits and workers’ rights in AI-driven workplaces.

“The answer to the challenges posed by automation lies in adaptation and innovation. Trade unions must embrace the same technologies that are disrupting the workplace. AI-powered chatbots can facilitate member recruitment and engagement, even reaching workers in traditionally difficult-to-organize sectors like domestic work,” suggests the United Nations University.

Indeed, some have already taken steps. The Workers’ Algorithm Observatory (WAO), a Princeton University initiative that assists workers and their allies in researching and monitoring algorithmic systems in platform work, has developed tools to explore how algorithmic management affects workers. Through tools such as FairFare and the Shipt calculator, WAO lets workers anonymously share information about their experiences, helping detect irregularities in how algorithms set wages and working conditions.

There are similar ideas on the domestic trade union scene. As Tomislav Kiš of Novi sindikat explained for the European AI & Society Fund, his organization’s goal is to organize domestic and foreign platform workers to pressure the Government into changing existing laws or creating new legal solutions.

“To achieve this, we need to strengthen the capacity of workers to organize and express their demands, whether through advocacy or material support. One of our ideas is to develop our own app to control and verify the data managed by the platforms. This would be a first step towards gaining insight into how platforms work and, consequently, limiting the power they develop based on the data collected. Having accurate data would strengthen the bargaining power of trade unions in regulating mutual relations,” Kiš said.

It is neither unusual nor unwarranted that much of the public discussion about AI focuses on how it threatens jobs and deepens inequality, but it may be time to change the narrative.

If algorithms can track and control workers, they can also be repurposed to protect them. As we often hear when we talk about artificial intelligence, technology itself is not the problem, the problem is how we use it.
