Artificial Intelligence: An Overview for Employers

Artificial Intelligence (“AI”), and more specifically generative AI such as ChatGPT and Google’s Bard, has recently captured both the headlines and the imagination of the public. Employers are quickly examining their processes to ensure they can both meet the challenges AI may pose and put themselves in a position to exploit the latest technological innovation.

We examine in this article what impact AI might have on the employment relationship, and what issues employers should be mindful of as they look to implement AI in their workplaces.

Recruitment

It is only fitting we begin our overview at the beginning of the employment relationship. AI may be used in a number of ways during the recruitment process, from using generative AI to draft job adverts and letters to applicants, to the making of recruitment decisions by specifically trained AI systems.

HR professionals should be mindful when using AI to generate job adverts and recruitment letters. Documents should be reviewed and checked to ensure they do not include discriminatory language or unintentionally exclude particular groups from applying. Case law in this area has found that job adverts, and in particular the forum used for such adverts, could potentially be discriminatory under the Equality Act 2010 (the “EqA 2010”).

The more significant concern is the use of AI systems to make recruitment decisions. Whilst such a proposal may appear dystopian, or the subject of science fiction novels, several large employers have already tested and implemented AI in their recruitment processes. Amazon’s efforts in this area were very publicly scrutinised in 2018 when its secretive AI recruitment system came to light. Amazon had spent a significant amount of money developing and implementing the system; it had fed the algorithm 10 years’ worth of job applicants’ data and asked it to rate applicants out of five stars. In practice, Amazon used the system to sift through applications and then recruited a small number of the top-ranked applicants. The job offers (and rejection letters) were entirely determined by the system. In principle this sounds like an efficient use of AI to maximise HR time and ensure a business recruits the best possible candidates. After all, why sift through several hundred applications when an AI system can do it in minutes?

However, as AI experts will attest, a system is only as reliable as the information you feed it. The system has no means of recognising inequality in its dataset: no way to tell if the 10 years’ worth of applicants’ data disproportionately over-represented a particular group, or whether individuals of a particular race or gender, or with particular disabilities, were under-represented or unlawfully not recruited. An AI system may exacerbate any inequality already present in the data it receives.

Amazon’s dataset consisted of applications predominantly from men. The system therefore ranked applications from men higher than those from women. Men accounted for more of the applications in the dataset and were more likely to receive job offers, and were therefore viewed by the AI system as more likely to be successful Amazon employees.

Similar issues have arisen in other fields with biased datasets. Courts in the US hit the headlines a number of years ago for the use of AI in sentencing and bail decisions. The system caused controversy when it was found to be disproportionately flagging African Americans as being at higher risk of re-offending, resulting in longer sentences and a greater likelihood of bail being refused.

If you are using AI, or looking to use AI, to make recruitment decisions, carefully examine the data the AI system has been trained on, look for over- or under-representation of particular groups, and examine the results the system produces to spot patterns or areas of potential discrimination. For example, does the system produce more male than female candidates? Does it favour people of a particular race or ethnicity, or of a particular educational background?
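
Where an organisation has the technical resources, a first-pass check of this kind need not be complicated. The short Python sketch below is purely illustrative: the candidate data is invented, and the 0.8 trigger borrows the US “four-fifths” rule of thumb, which has no fixed equivalent under the EqA 2010. It simply compares shortlisting rates between groups and flags large disparities for human review.

  # Purely illustrative: the data, group labels and 0.8 threshold are
  # assumptions for demonstration, not legal standards.
  from collections import Counter

  # Hypothetical sift outcomes: (applicant group, shortlisted?)
  results = [
      ("male", True), ("male", True), ("male", False), ("male", True),
      ("female", True), ("female", False), ("female", False), ("female", False),
  ]

  shortlisted = Counter(group for group, ok in results if ok)
  applied = Counter(group for group, _ in results)

  # Selection rate per group: shortlisted / total applications.
  rates = {group: shortlisted[group] / applied[group] for group in applied}
  best = max(rates.values())

  for group, rate in rates.items():
      # Flag any group whose rate falls well below the best-performing
      # group's; 0.8 mirrors the US "four-fifths" heuristic, used here
      # only as a trigger for further human review.
      flag = "  <-- review" if rate < 0.8 * best else ""
      print(f"{group}: selection rate {rate:.0%}{flag}")

A flagged disparity is not of itself proof of discrimination, but it does indicate where the training data and the system’s outputs warrant closer scrutiny.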

The use of a flawed AI system to make recruitment decisions could give rise to a claim for discrimination.

Organisations should also be mindful of the information law implications of such a system. Article 22(1) of the UK GDPR places significant restrictions on automated decision-making. Subject to very limited exceptions, the UK GDPR prohibits solely automated decisions which produce legal effects for the data subject, or similarly significant effects; for example, where the AI system is entirely responsible for the decision whether to recruit an individual, or whether they advance to the next stage of a recruitment process.

The exceptions are limited to three specific circumstances:

  1. Where the decision is necessary for entering into, or the performance of, a contract;
  2. Where permitted by law which also lays down suitable measures to safeguard the data subject; or
  3. With the data subject’s explicit consent.

Even if one of these exemptions applies, the data controller still has to put additional safeguards in place.

If you are unsure about the use of an AI system and whether it would breach your duty as a data controller under UK GDPR, seek legal advice from an Information Law expert.

The Workplace

AI is slowly seeping into use within the workplace, with a number of companies recently announcing the roll-out of generative AI tools based on ChatGPT. This will only become more widespread with Microsoft’s roll-out of its generative AI tools, aptly named “Copilot”, within its widely used Office 365 suite of applications.

Organisations may find their employees using AI to perform parts of their jobs, from the drafting of emails and correspondence to undertaking research tasks. Some organisations have already implemented AI policies, or adapted their IT and computer use policies, to address the use of AI in the workplace. Some have banned its use altogether: both Apple and Samsung have recently announced bans on their employees’ use of generative AI tools, the latter having recently suffered a leak of sensitive code as a result of its employees’ use of ChatGPT to write computer code.

In addition to the potential security risks posed by the use of online AI tools, organisations and employees should be wary of the quality of the work produced. Recent high-profile cases in both the US and UK have highlighted the potential issues with the use of AI in the courts by litigants and their representatives. A lawyer in the US and a litigant in person in the UK were found to have used ChatGPT to cite cases in support of their legal arguments in submissions to court. This, whilst not illegal, did leave both somewhat embarrassed when the cases were found not to exist or, where they did exist, not to contain the passages or points cited by the generative AI tool. Both cases resulted from a now well-known problem with early generative AI tools: their tendency to “make things up”. In the absence of a direct answer or the information needed to answer a query, generative AI tools will sometimes generate their own answer. The tools are not yet sophisticated enough to understand the implications of this, or to highlight to their users where they do not have a source for a particular answer.

In addition to the use of AI by employees, HR professionals may find increasing uses for AI tools, from the drafting of policies and letters to decision-making in promotion, disciplinary or grievance processes. The same considerations outlined above with regard to the use of AI in recruitment apply here too. The documents and work produced by AI tools should be carefully scrutinised to ensure accuracy, and to minimise the risk of the AI system exacerbating inequality and committing the organisation to a discriminatory course of action.

It is evident that AI will have a profound and multifaceted impact on the workplace and, if used correctly, with clear boundaries and rules for employees on appropriate use, could deliver significant cost savings and increased efficiency for many employers.

It should not be left to employees to determine when and how to use AI in the workplace; employers should develop an AI strategy and provide clear policy and guidance for employees on the appropriate use of AI.

If your organisation is concerned about the use of AI in your workplace and wishes to discuss those concerns with an Employment Law professional, or your organisation wants to consider developing an AI policy for employees, you can contact a member of the team HERE.
