The union body is developing new AI legislation alongside groups from education, the tech industry, and government.
The UK’s Trades Union Congress (TUC) says that new laws are needed immediately to protect workers from AI and ensure that the technology “benefits all.” The TUC is launching a new task force to deal with the “gap in legislation.” The union group says that if things don’t change quickly, the UK job market could turn into the “Wild West.” This warning comes after a group of MPs also asked for AI laws to be passed more quickly.
The new TUC task force comprises professors, lawyers, politicians, and tech experts. Its goal is to "fill the gap" in employment law by drafting new legal protections to ensure AI is regulated fairly at work, for the benefit of both employees and employers. The group is drawing up legislation called the AI and Employment Bill. It plans to publish its proposals early next year and begin lobbying to get the bill into UK law, which is unlikely to happen until after the next general election.
Its members are drawn from techUK; the Chartered Institute of Personnel and Development; BCS, the Chartered Institute for IT; the Ada Lovelace Institute, which works on AI policy; and a number of unions and universities. Four MPs will also sit on the committee: the Conservatives' David Davis, Labour's Darren Jones and Mick Whitley, and the SNP's Chris Stephens.
Kate Bell, assistant general secretary of the TUC, and Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, will co-chair the new task force. In a statement, the pair said the UK was "way behind the curve" on regulating AI and that UK employment law was failing to keep pace with new technologies, leaving employers unsure how to "fairly take advantage of the new technologies".
AI is already widely used across the economy. Automated systems, for example, sort through CVs and analyse biological data to judge whether a candidate is a good fit. However, the TUC says companies often buy AI systems without fully understanding the implications for workers.
The task force is expected to build on the TUC's existing proposals for protecting workers when employers use AI. These include requiring companies to consult trade unions before deploying the most high-risk and intrusive forms of AI, and giving all workers the legal right to have a human review decisions made by AI.
The TUC has also urged the government to amend the UK GDPR and its planned replacement, the Data Protection and Digital Information Bill, as well as the Equality Act, to guard against discriminatory algorithms. It hopes all of these issues will be on the agenda at the AI Safety Summit in November.
“AI is already making life-changing decisions about how millions of people work, such as who gets hired, how their performance is evaluated, and who gets fired,” said Bell. “But UK employment law is very out of date, leaving many workers open to being exploited and treated unfairly.”
Neff said that rules must be practical and must ensure AI works for everyone. Of the planned summit, she added: "AI safety isn't just a problem for the future, and it's not just a technical issue. Both employers and workers are facing these problems right now, and they need help from researchers, lawmakers, and civil society to build the skills they need to solve them."
How AI is used and the need to move quickly
The warnings from the TUC and its new task force come shortly after the release of the long-awaited AI regulation report from the House of Commons Science, Innovation and Technology Committee, which has held hearings and examined the effects of artificial intelligence, particularly generative AI such as OpenAI's ChatGPT.
In the report, the MPs say there is no need to halt development of next-generation foundation AI models, but they urge the government to accelerate lawmaking. "Without a serious, quick, and effective effort to set up the right governance frameworks and take the lead in international initiatives, other jurisdictions will get ahead of the UK, and the frameworks they set up may become the standard, even if they are less effective than what the UK can offer."
"We urge the government to speed up and not stop setting up a governance regime for AI, including any legal measures that may be needed," the report concludes.
Nicholas Le Riche, a partner at law firm BDB Pitmans who is not part of the new task force, told Tech Monitor that the government appears willing only to issue guidance on AI use rather than specific legislation. "However, as AI becomes more and more important to our jobs, it won't be long before we need something more concrete."
"Transparency about how AI is used at work is important," says Le Riche. "Because AI can be used to decide whether someone gets or keeps a job, there will likely be calls for rules ensuring workers consent to, or are at least consulted about, its use. In the same way, there may need to be laws ensuring that AI is not making decisions on its own, but is always overseen by a human manager who can correct any mistakes or potential bias."
A spokesperson for the government told Tech Monitor that it already has its own task force bringing together government and businesses to work out how to use AI safely and reliably. "AI will help the economy grow and create new well-paid jobs nationwide. It will also make our current jobs easier and safer to do," they said. "Our pro-innovation, context-based approach to regulating AI will boost investor confidence, help create these new jobs, and allow our world-class regulators to look at any AI-related problems in the context in which they happen."