5 considerations for employers when implementing AI technologies in the workplace

Whilst the use of AI undoubtedly brings many benefits, which are already being realised in the workplace, improper use or inadequate risk mitigation can have a detrimental impact on the workforce. In this article we address five key factors employers should consider when implementing such technologies in the workplace, to reduce risk and get the most out of them.

  1. Unconscious bias

AI technologies are built by humans, who may unconsciously hardwire their own biases into the tools they help to create. There are two principal potential sources of a discriminatory outcome: (1) the algorithms underpinning the AI technology, which are programmed by software engineers; and (2) in the case of generative AI, the data used to train and develop those algorithms.

For example, AI designed to analyse and evaluate interviews through facial, body or speech recognition technology may draw adverse inferences and deny employment to candidates who are disabled or neurodiverse. An algorithm may not be sophisticated enough to understand that a candidate cannot make consistent eye contact, emulate typical facial expressions, speak clearly, pronounce certain sounds or make certain gestures because of, for example, a disability. It may therefore conclude that the candidate has not engaged fully or performed to the expected standard, resulting in viable candidates being excluded from the recruitment process.

Where bias in an AI system or algorithm results in a candidate being rejected or unfairly selected because of a protected characteristic, such as disability, the employer using that system could face claims for discrimination under the Equality Act 2010. AI is not inherently discriminatory and can dramatically increase efficiencies in the workplace, but employers relying on it at any stage of the employment relationship (not only during recruitment) should ensure they fully understand how the AI works, so that they can make informed decisions about its use, explain and objectively justify decisions made by the AI, and implement effective human-led interventions where necessary. The question of legal responsibility for discriminatory decisions arising from AI is not yet settled law, but the ability to provide precise explanations for decisions may prove critical to an employer’s ability to defend any resulting discrimination claim.

  2. Data privacy and protection

Data protection law protects individuals who may be subject to automated decisions. Under Article 22 of the UK General Data Protection Regulation (UK GDPR), individuals have the right not to be subject to decisions based solely on automated processing. The UK government indicated in its June 2022 response to its consultation “Data: a new direction” that it was considering reforms to Article 22 to clarify the circumstances in which it applies, and some further clarification has materialised in the Data Protection and Digital Information (No.2) Bill. The new Bill, which is currently in its third reading in the House of Commons, adds an additional safeguard: “When considering whether there is meaningful human involvement in the taking of a decision, a person must consider, among other things, the extent to which the decision is reached by means of profiling.” The intention appears to be to ensure that human involvement is effective in practice and does not merely compound the existing risks of data-driven decision-making.

While the Data Protection and Digital Information (No.2) Bill is not yet law, under the existing UK GDPR regime employers are restricted from making solely automated decisions that have a significant impact on data subjects unless the decision is: (i) authorised by law; (ii) necessary for a contract; or (iii) based on the individual’s explicit consent.

Even then, employers must observe safeguarding requirements to protect individuals’ interests. This means notifying the individual that automated decision-making is taking place and explaining how their data is used in the decision-making process. A human-led appeals process should also be implemented so that an individual can challenge an automated decision before a human overseer, with a view to limiting the scope for baked-in bias and building trust in the system.

Special category personal data, such as health or biometric data, carries another layer of protection. To process this type of data, employers must identify a specific lawful basis and a separate condition for processing in accordance with Articles 6 and 9 UK GDPR.

  3. Right to respect for private and family life

All employees enjoy protection under Article 8 of the European Convention on Human Rights (ECHR) – the right to respect for private and family life – while at work. Employers should be mindful that if they choose to discipline or dismiss an employee for wrongdoing, the nature of the evidence underpinning that decision will be interrogated. Where AI systems have been used to monitor employees in breach of Article 8 ECHR, employers may face adverse consequences, including a detrimental impact on employee relations or complaints being raised to the Information Commissioner’s Office.

  4. Employee wellbeing

A challenge of AI-driven “real-time” performance monitoring is that, if not limited appropriately, it can create an “always-on” workplace culture, blurring the boundaries between work and home and placing employees under greater stress and pressure to be seen as always available to work. This has the potential to negatively impact employee wellbeing and could in turn have a knock-on effect on workplace culture.

  5. Mutual trust and confidence

Linked to the above is the potential for ethical concerns around trust, deriving from the imbalance of power between employer and employee, to arise if automated decisions are simply rubber-stamped by employers or followed without review. Where an employer cannot justify decisions made by AI technologies, this can lead to a breakdown in trust between the employer and the employee affected by such a decision.

Whilst the benefits of AI technologies are undoubtedly profound for employers, they do not come without legal risks. Employers are well advised to stay up to date with rapidly changing technological developments, but should be mindful of the need to avoid blindly adopting AI technologies in their practices. This is particularly so given the breadth of AI-driven solutions available, each carrying its own unique challenges. It is critical that, before implementing AI technologies and relying on automated decisions, employers understand what the technologies do and how they operate, so that they can develop a tailored approach to mitigating the potential legal risks.

Next week we will look at what practical steps we recommend employers take when adopting AI technologies in the workplace.