What do AI and technology mean for the employment sphere?

When we connect artificial intelligence (AI) and technology in the workplace, often one of the first things that springs to mind is robots replacing workers and our jobs. Whether that’s a misconception remains to be seen, but one thing is for sure: in recent years we have experienced a technological revolution in the workplace, accelerated by the Covid-19 pandemic and the resulting increase in homeworking. AI is no longer an abstract concept in the workplace, and AI-powered technologies now feature at every stage of the employment relationship, from recruitment through to dismissal, carrying huge benefits as well as risks.

An overview of AI and its uses in the workplace

Broadly speaking, AI refers to the replication of human analytical and/or decision-making capabilities by technology or, as some would say, the substitution of technology for human beings as decision makers.

Generative AI, or Gen AI, has seen extraordinary media attention in the last 6 to 12 months following the launch of ChatGPT, and is undoubtedly transforming how businesses operate. In its most basic form, generative AI uses machine learning to create new content, for example, text, imagery and audio. This differs from other forms of AI, which tend to focus on detecting patterns, making decisions, and classifying and analysing data by processing it to give simple results.

AI covers a wide spectrum of technology but the most common types of AI technologies that assist decision-making in the workplace are:

  • Automated / algorithmic decision-making (ADM): an algorithm is used to make a decision without any human intervention. For example, a psychometric or an aptitude test in the recruitment process which uses pre-programmed algorithms and criteria.
  • Machine learning: focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy. One example of its use is a hiring system trained on the data of a company’s historical employees and used to decide who to hire in the future.
  • Profiling: uses algorithms to analyse personal data, including an individual’s personality, behaviour, interests and habits, to make predictions or decisions about them. Such technology could be used to predict whether an employee will meet targets, whether they are likely to turn up late for work, or even whether they are likely to quit their job and/or move to a competitor, enabling the employer to intervene to retain key talent.

Typically, AI may be used in recruitment and hiring to review CVs, schedule interviews and evaluate performance through facial and speech recognition technology. Some employers have even begun deploying “chatbots”, otherwise known as virtual assistants, to communicate with candidates and automate other elements of the recruitment process such as collecting relevant recruitment information.

Beyond recruitment, employers are also using AI technologies for monitoring and performance management purposes, including monitoring the location, working hours and productivity of employees, which has become increasingly common as a result of the shift to home working (or at least a combination of home and office working) for many. Using AI technologies to track the websites and applications that employees use, and to monitor how long they spend on any given task, has been embedded in some employers’ practices for a considerable time now, but the ability to analyse characters typed on a keyboard, or to monitor whether an employee is working at their computer screen using eye-tracking software, is novel.

Further, employees who fail to meet performance metrics may be subject to automatically generated disciplinary recommendations or even termination notices. The capabilities of AI algorithms also extend to use in a redundancy process, influencing decisions about who should be made redundant through a scoring system for redundancy selection, with the software making the selections and removing the need for human intervention.

Undeniably, AI technologies are everywhere in the workplace, and most employers and employees across all industries are either utilising such technologies themselves or are subject to automated decisions on a daily basis, often without realising it.

An unregulated landscape, but for how long?

It therefore may come as a surprise that in the UK there is currently no bespoke legal framework in place to regulate the use of AI in employment (or elsewhere for that matter). Instead, the use of AI is governed by pre-existing employment legislation under the likes of the Equality Act 2010, the UK General Data Protection Regulation (GDPR), the Human Rights Act 1998 and the Employment Rights Act 1996.

The UK has historically been conservative in this space, but in recent times the UK government has come under increasing pressure to set out its position on the regulation of AI. This has been driven not only by the exponential growth of AI but also by the launch of the government’s 2022 Digital Strategy and its stated ambition for the UK to become a new technological powerhouse on the world stage. Recent legislative developments by the EU in this regard, together with similar proposals by the US and Canada, have only served to fuel this debate.

The Office for Artificial Intelligence published its AI White Paper setting out the UK’s “pro-innovation approach to AI regulation” earlier this year. It adopts a “bottom-up” approach, in that it empowers existing regulators (such as the Financial Conduct Authority, the Information Commissioner and the Equality and Human Rights Commission) to regulate the AI landscape with reference to a set of five overarching principles:

  1. Safety, security and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

The EU, by contrast, is seeking to impose a much more stringent regime through the EU AI Act, introducing a new regulator and significant financial penalties for non-compliance: up to EUR 30 million or 6% of annual worldwide turnover for corporates (whichever is higher).

The Trades Union Congress (TUC) has called in recent years for greater measures to protect against algorithmic discrimination, including:

  • the creation of statutory guidance for employers on what steps can be taken to avoid discrimination in the workplace;
  • a reversal of the burden of proof for discriminatory AI use, meaning that employers would have to disprove that discrimination has occurred (as opposed to the claimant bearing the burden of proof);
  • a right of explainability, giving employees the right to ask for a personalised explanation of when AI systems are being used; and
  • the introduction of a statutory duty for employers to consult with trade unions before new high-risk AI is introduced.

For now, at least, the UK seems comfortable with the non-legislative approach to AI regulation envisioned by the March 2023 White Paper. However, it is worth noting that the Labour Party has gone on record disagreeing with this strategy, and with the next general election on the horizon there is certainly scope for change in the near future. Until further guidance in this area emerges, employers are well advised to keep up to date with technological developments and remain alive to the potential unintended consequences that may arise out of a largely unregulated AI landscape. Over the coming weeks, we will explore in more detail the current and future use of AI in the workplace and look at practical steps employers can take to properly prepare for its use, as well as to mitigate any risks which arise in consequence.
