Artificial Intelligence for Workers, Not Just for Profit: Ensuring Quality Jobs in the Digital Age

Adopted at the Executive Committee meeting of 04-05 March 2025

Artificial Intelligence (AI) is reshaping the world of work, profoundly altering job structures, the recognition of competences, and workplace conditions. AI is neither good nor bad for workers; it depends on how it is used and by whom it is controlled. Such a technology, even if it increases productivity, does not necessarily increase the wellbeing of the majority of workers. While AI promises efficiency gains and has the potential to improve the quality of work, it also threatens the value of human labour, widens inequalities, and creates significant challenges regarding fundamental rights, wages, occupational health, safety, and fairness. The effects of AI depend on the political choices made around it: legislators framing the technology to deliver for the general interest, trade unions protecting working conditions, public investment in education, public and private investment in infrastructure, enforcement of fair competition rules, and taxation that redistributes the gains, as developed by the Nobel Prize in Economics laureate Daron Acemoglu.

The paper presented and discussed at the ETUC strategic discussion on AI at work in December 2024 confirmed the ETUC's existing demands and identified the following new aspects and main challenges:

  • Impact on Jobs and Wages: AI-driven automation and generative AI may cause, without appropriate regulations, job displacement, deskilling, and precarious employment, threatening wages and job autonomy. The European Commission must introduce protections to ensure decent working conditions.

  • AI-Induced Discrimination: If not implemented responsibly, AI can reinforce societal biases, worsening inequalities in hiring, access to training, promotion, and dismissals. The lack of contextual understanding and empathy limits the ability of systems to handle sensitive human interactions. Strong enforcement of anti-discrimination laws is crucial.

  • Data Privacy and Surveillance: AI-driven workplace surveillance raises privacy concerns. Despite GDPR, enforcement gaps remain. Workers and their representatives must have control over data collection and usage.

  • Ethical AI and Human Rights: AI must follow strict ethical norms to ensure transparency and accountability. The AI Act lacks workplace-specific protections, which must be addressed to safeguard workers' rights.

  • Training and AI Literacy: Workers need lifelong AI-related training. Employers must be legally required to provide relevant training and to involve trade unions in joint training for upskilling.

  • Worker-led innovation: Worker voice and experience must be embedded in all stages of the AI value chain. This requires not only collaborative innovation at the workplace level but also securing worker influence at the Research and Development stage and gaining a seat at the table in industrial strategy discussions. Trade unions must be empowered to negotiate AI deployment strategies that enhance job quality and productivity while ensuring fairness, worker autonomy, and collective decision-making.

The ETUC calls on the EU institutions to take immediate and decisive actions in the following areas to ensure that AI delivers for all: 

  1. A Directive on Algorithmic Systems in the Workplace

The ETUC reiterates its 2022 call for a dedicated directive on algorithmic systems at work. The ETUC considers the Platform Work Directive’s chapter on algorithmic management a strong foundation for a broader directive covering all workplaces, yet its scope needs to be clarified to ensure that a minimum standard applies to all workers. Furthermore, this directive must ensure human oversight of all AI-driven decision-making processes, reinforcing the human-in-control principle. Workers and their representatives must have the right to challenge and overturn AI decisions that impact their employment or well-being. Employers must be obligated to involve trade unions and worker representatives in AI-related workplace decisions and to ensure the possibility of negotiated solutions. AI-driven monitoring must be transparent and non-intrusive, allowing workers and their representatives full access to relevant information to enable meaningful consultation. Workers must be able to contest automated decisions and demand human intervention. AI must not process non-work-related personal and psychological data, and strict safeguards must be implemented to protect occupational health and safety.

  2. AI Liability

The ETUC strongly condemns the European Commission’s announcement of 12 February 2025, in its work programme 2025, to withdraw the AI Liability Directive, a move that leaves workers exposed to the risks posed by artificial intelligence in the workplace. Without clear liability rules, those affected by AI-driven decisions, whether through workplace accidents, unfair dismissals, or biased hiring algorithms, to name but a few, will face greater difficulty in seeking redress. The lack of clear regulation also risks shifting liability onto the workers operating AI systems, rather than the companies developing them and deciding on their implementation and risk management. The AI Liability Directive was meant to establish accountability for employers when AI causes harm. Its withdrawal raises concerns about ensuring a fair balance between Big Tech and corporate interests on the one hand and workers' protection on the other.

The ETUC calls on the Commission to take the precautionary principle seriously, to reconsider its decision and to uphold its duty to protect workers, not abandon them to corporate-driven AI policies. The ETUC demands binding rules ensuring that businesses are held accountable for adverse impacts when AI is introduced and used in relation to work. Workers’ rights to fair treatment, decent work and privacy must be upheld; this is not negotiable. Labour is not a commodity.

  3. Regulation of Global Digital Value Chains

AI's value chains rely on labour-intensive data preparation, often outsourced under poor working conditions in the Global South, but also in Europe. Workers in data labelling and content moderation are exposed to severely exploitative conditions, including low wages, excessive monitoring, and exposure to distressing content. Many AI training datasets rely on underpaid or informal labour, exacerbating inequalities. The ETUC calls on the EU institutions to ensure that the Corporate Sustainability Due Diligence Directive (CS3D) and the Corporate Sustainability Reporting Directive (CSRD) maintain and enforce corporate accountability and, in particular, do not diminish the civil liability included in the CS3D. This framework should secure fair labour norms in AI development, preventing exploitation and ensuring ethical sourcing of AI-related labour. Similarly, the processing of data should always meet the standards set in European law, and must not be outsourced to jurisdictions with lower labour and social standards.

  4. Copyright Protection and AI’s Impact on Creative Work

AI models rely on vast amounts of professional and personal data, raising concerns about copyright infringements, in particular in creative sectors. The ETUC calls on the EU institutions to establish a system based on informed consent, transparency, and fair remuneration for creators. Strengthened GDPR enforcement is needed to prevent unauthorized replication of creators’ image, voice, and likeness.

  5. Environmental Impact of AI

AI technologies require significant amounts of energy and natural resources, which can undermine climate gains and endanger climate goals. Training large AI models can consume as much electricity as powering entire countries, with estimates showing that a single AI model can generate carbon emissions equivalent to five cars over their lifetime. It is crucial to ensure that this growing energy demand is met with new renewable energy sources to maintain environmental sustainability. Data centers consume vast amounts of water for cooling, threatening water supply in regions already facing scarcity. AI also relies on chips, whose production poses challenges for climate goals due to high energy consumption, difficulties in waste recycling, and extensive water usage. Additionally, while the EU is implementing policies to phase out fossil fuels, AI's growing carbon footprint could neutralize those gains, making job losses in carbon-intensive industries seem futile. The ETUC calls on the EU institutions to require businesses to report on their AI's environmental footprint against mandatory transparency criteria, to enforce sustainability norms, and to ensure that AI adoption does not undermine or annul existing climate policies. Companies using AI must commit to reducing energy consumption and offsetting environmental damage.

  6. Tackling AI Monopolies and Ensuring European AI Sovereignty

The AI market is increasingly dominated by a handful of Big Tech corporations, primarily from the US and China, posing significant risks to European data sovereignty, technological independence, and fair competition, and impairing the development of European strategic autonomy. To address these challenges, the ETUC calls on the EU institutions to strengthen antitrust laws and introduce strict regulations to prevent market concentration in AI services and infrastructure. Such regulations should foster an environment where European AI enterprises can compete, supported by public funding, innovation grants, and legal protections, in particular for workers. A fair taxation system should be supported to redistribute AI-generated wealth, funding public digital infrastructure and worker protection, in particular raising wages. Open-source and decentralized AI models should be encouraged to provide alternatives to copyright-protected AI, ensuring fair access to AI-driven innovation. Additionally, the ETUC calls on the EU institutions to make full use of the Digital Services Act and the Digital Markets Act and to maintain regulatory integrity against lobbying efforts that could undermine EU regulatory frameworks.

  7. Investing in a European AI Infrastructure

Europe must assert its leadership in ethical AI development by investing in sovereign AI infrastructure. Fibre networks are crucial for gigabit connectivity, but they reach only 64% of EU households, compared to over 99% in Japan and South Korea. Achieving connectivity targets requires an estimated investment of EUR 200 billion. Public investment in European AI projects should prioritise worker-centric and human-rights-compliant technologies. A European AI strategy aligned with fundamental labour rights is essential for fostering technological autonomy while ensuring AI serves society rather than corporate profit. The EU should also consider developing a public digital infrastructure, drawing inspiration from public broadcasters. For example, a publicly funded social media platform or AI development centre could ensure fairer access to information, uphold democratic values, and prevent market monopolization by Big Tech firms. Furthermore, the ETUC demands that public data be stored on European public cloud services to ensure security and independence. Public initiatives in AI could create an ecosystem that fosters innovation while safeguarding and promoting workers' rights and privacy.

The rapid deployment of AI in workplaces presents a profound challenge to workers’ rights, employment security, and social justice. The European legislator must act decisively to safeguard workers by ensuring strong regulatory frameworks that promote fair AI deployment, protect data privacy, and uphold ethical AI practices. The ETUC calls for robust legislative action, including a dedicated directive on algorithmic systems at work, measures to counter AI monopolies, and substantial investment in ethical, worker-centred AI development. To enforce the legislative framework on AI, the ETUC underlines that Member States and the EU have to strengthen public regulators by ensuring that they are well equipped with sufficient resources and qualified staff. Additionally, it is crucial to strengthen collective bargaining at all levels to ensure that AI deployment aligns with fair labour practices and workers' rights and is tailored to sector-specific needs. The EU cross-sectoral and sectoral social partner agreements on digitalisation, including AI, are good examples of how social partners can shape the digital transition to improve the quality of working life for everyone, as long as there is the political will of the institutions and social partners to act in this direction.