
Many Employers believe that the continued evolution of AI offers opportunities to support all parts of their business and improve productivity (and reduce costs!).
AI is already being used in many work areas including:
- Human Resources – in Recruitment (to devise job adverts, source candidates, screen CVs and perform assessments – see ‘Top Tips for using AI in Recruitment’ from law firm Lewis Silkin); to summarise Performance Reviews and development plans/performance management; in task allocation and shift scheduling; and in surveillance and monitoring of the workforce, tracking workers to monitor productivity or health and safety.
- Jobs that are currently data-intensive, admin-based or research-based (including many graduate-entry jobs)
- Customer Services/AI assistants
- Language/Translation
- In sales, service and the marketing industries – using generative AI tools that write text or generate images.
However, there are risks in using AI – the loss of jobs is a major concern (although some argue that new jobs will be created to use the technology effectively). The main risk for businesses using AI is that its output can be inaccurate or unreliable, because AI systems can:
- produce algorithmic bias;
- produce inaccuracies or fabricate facts;
- perpetuate existing prejudices in the data;
- produce plagiarism and mimicry of others’ content.
Inaccurate output can create legal issues and ethical concerns, and it also raises important data protection, intellectual property and copyright concerns. Employers need to consider how far they want to allow employees to use AI in their day-to-day work and what safeguards need to be put in place to ensure the output is accurate, relevant and used in the right context.
Here, we look in a bit more detail at how AI’s capability to analyse vast amounts of data poses significant data protection and privacy concerns, as the collection and analysis of personal data can infringe an individual’s privacy and breach current data protection regulations.
Many Employers will have professional or industry bodies that have already issued guidance on the use of AI, and this guidance should be followed. The UK’s official Regulators have also been asked by the Government to issue best practice guidance for businesses. For example:
- Ofcom (which regulates communications, including the TV and radio industry) published a Note to Broadcasters in 2023 clarifying that broadcasters remain accountable for their use of AI-generated synthetic media under the existing Ofcom Broadcasting Code.
- The UK’s Financial Conduct Authority (FCA) has an AI-Lab.
- The Competition and Markets Authority (CMA) has released its AI strategic update.
- The Equality and Human Rights Commission (EHRC) has its AI guidance here.
Existing and new guidelines will continue to evolve, as will each Company’s own guidelines. Continuing collaboration between regulators, technology companies and industry bodies will be essential to strike a balance between innovation and protecting personal data with ethical safeguards.
Employers who use AI in their business will need to publish general AI principles and guidance for its use, starting with the following points (to be expanded upon depending on the relevant issues in the business/industry etc.):
General Business AI Principles:
- Senior managers should consider all legal, data protection, intellectual property, commercial and information security issues arising from using AI in the business.
- Acas advise that Employers should develop clear policies regarding the use of AI in the workplace and should consult employees and any representatives on its introduction, highlighting how it can improve employees’ roles.
- AI can enhance and support the Company’s work but it cannot make complex decisions and will not replace human judgement and critical thinking.
- AI is only useful if the Company understands what it’s doing and the results it’s showing. A business’s use of AI should not contravene any laws or regulations, or create any content that is misinformed or untrustworthy. AI outputs need to be checked for accuracy, tone and bias, and staff trained to get the best outputs.
Data Protection Issues with AI:
Many AI models use vast amounts of personal and behavioural data, harvested mostly from the public domain, to generate content. Often, however, this data is obtained without explicit, freely given user consent. Under the UK’s current data protection laws (the UK GDPR), businesses must handle personal and sensitive data responsibly, whether it is collected or generated. This is made difficult when AI models use, reuse and combine data in complex ways that can obscure its original source, creating risks for the business, which may not know how the model was trained or where the data came from.
This raises questions as to whether it is fair or lawful for a business to use data obtained in this way.
In addition, the UK’s current data privacy laws were written before many AI tools existed, so legal guidance is still evolving. However, UK and EU regulators are now taking a clear stance that AI outputs are subject to the GDPR where the processing involves content that may contain personal data.
The current UK GDPR has a number of principles that businesses must adhere to, including:
- data minimisation (not using an excessive amount of personal data, which can be difficult where AI data sets are very large and you may not know if personal data is involved)
- purpose limitation (using personal data for only the purpose for which it was collected)
- privacy (ensuring data subjects have consented to the use of their data or the data has been anonymised; ensuring the accuracy of the data before use; keeping the data secure; and erasing personal data when the original purpose of collection has been achieved)
- legal basis (businesses should only use personal data if they have a legal basis to do so, such as clear, explicit consent), and
- transparency (informing data subjects about how and why their personal data is being used in a way they understand).
One way for businesses to manage the potential data protection risks of using AI is to carry out a data protection impact assessment (DPIA) before implementing or using an AI system. This should consider the risks to data subjects and the impact that the processing will have on individuals.
The risk assessment should consider the nature, scope, context and purposes of the processing a business intends to do, and the risks this poses to individuals’ rights and freedoms. This will help the business to determine the measures they need to put in place to ensure their data processing complies with their data protection obligations.
The risk assessment should also ensure the business understands any AI model it uses, so that it can identify, diagnose and resolve issues such as undetected biases and ethical implications. The business can then minimise these risks by adopting ethical AI principles, including strict testing and transparency, and by improving staff awareness and vigilance.
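By way of illustration only, the short sketch below (in Python) shows one very simple form of testing a business might run on an AI recruitment screen: comparing shortlisting rates across candidate groups and flagging any group whose rate falls well below the best-performing group (the widely used ‘four-fifths’ rule of thumb). The data, group labels and threshold are all hypothetical assumptions – this is a starting point for discussion with technical and legal advisers, not a compliance test.

```python
# Minimal sketch: checking an AI screening tool's outcomes for disparate impact.
# All records and group names below are hypothetical; real testing would use the
# business's own outcome data and a proper statistical and legal review.

from collections import defaultdict

# Hypothetical (candidate_group, shortlisted) records from an AI CV screen.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
shortlisted = defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    if passed:
        shortlisted[group] += 1

rates = {group: shortlisted[group] / totals[group] for group in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"  # 0.8 is the common 'four-fifths' rule of thumb
    print(f"{group}: shortlist rate {rate:.0%}, ratio vs best {ratio:.2f} -> {flag}")
```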
For example, many AI tools retain the data that is put into them. This can cause a problem where a tool is being fed confidential information (e.g. from HR processes such as writing or responding to a grievance, disciplinary or redundancy consultation, which may include personal data about other employees or confidential information about the business). In these situations, the tool may retain the data and reuse it elsewhere, causing serious problems for privacy and confidentiality.
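One practical safeguard, where staff are permitted to use external AI tools at all, is to strip obvious personal data from text before it leaves the business. The sketch below (Python) illustrates the idea; the employee names, patterns and example prompt are purely illustrative assumptions, and a real deployment would need a far more thorough approach – simple pattern-matching will not catch everything.

```python
# Minimal sketch: redacting obvious personal data before a prompt leaves the business.
# The name list, patterns and example text are illustrative only.

import re

KNOWN_NAMES = ["Jane Doe", "John Smith"]          # hypothetical employee names
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+44\s?\d{4}|0\d{4})\s?\d{6}\b")

def redact(text: str) -> str:
    """Replace known names, email addresses and UK-style phone numbers with placeholders."""
    for name in KNOWN_NAMES:
        text = text.replace(name, "[NAME]")
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

draft = "Please summarise the grievance raised by Jane Doe (jane.doe@example.com, 01234 567890)."
print(redact(draft))
# -> "Please summarise the grievance raised by [NAME] ([EMAIL], [PHONE])."
```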
The risk assessment should also consider the long-term storage of data from AI models, how the business mitigates against unauthorised access to or misuse of that data, and how it deletes data when it is no longer required or when a data subject exercises their right to stop the business processing their data.
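As a simple illustration of the deletion point, a retention routine for stored AI prompt/response logs might look something like the sketch below (Python). The folder name, file format and 90-day period are assumptions standing in for the business’s own retention policy; deletion in response to a data subject request would need to be handled and recorded separately.

```python
# Minimal sketch: applying a retention period to stored AI prompt/response logs.
# The folder name, file pattern and 90-day period are assumptions standing in for
# the business's own retention policy.

from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_DIR = Path("ai_prompt_logs")   # hypothetical folder of saved prompts/outputs
RETENTION = timedelta(days=90)

def purge_expired(now=None):
    """Delete log files older than the retention period and return the paths removed."""
    now = now or datetime.now(timezone.utc)
    removed = []
    if not LOG_DIR.exists():
        return removed
    for path in LOG_DIR.glob("*.txt"):
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if now - modified > RETENTION:
            path.unlink()          # permanently removes the file
            removed.append(path)
    return removed

if __name__ == "__main__":
    for path in purge_expired():
        print(f"deleted {path}")
```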
Guidance from the ICO (the Information Commissioner’s Office) about impact assessments is here.
The ICO has also produced Guidance on AI and Data Protection.
There is further reading from the ICO on the accountability and governance implications of AI, and a data protection risk toolkit here.
Other AI/Data Protection information:
The House of Commons Library has a 2023 research briefing called ‘Artificial Intelligence and Employment Law’, which is a really interesting introduction to AI in the workplace.
In June 2025 the UK Government succeeded in getting the new Data (Use and Access) Bill through Parliament. The Bill is now awaiting Royal Assent, and its provisions will come into force once further regulations are published. It reforms parts of the existing UK data protection framework, including data subjects’ rights where AI is used. An interim summary of the changes to existing UK data protection law is explained here.
The UK Government’s current AI principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress:
| Government’s AI principles | High level summary |
| --- | --- |
| Safety, security and robustness | AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. |
| Appropriate transparency and explainability | AI systems should be appropriately transparent and explainable. |
| Fairness | AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider descriptions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law. |
| Accountability and governance | Governance measures should be put in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle. |
| Contestability and redress | Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm. |
The European Union (EU) introduced the Artificial Intelligence Act (‘AI Act’) in August 2024. The Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI. It entered into force on 1 August 2024 and will be fully applicable two years later, on 2 August 2026, with some exceptions. Details are here.