[Image: A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.]

Briefing: AI and ChatGPT in the workplace: friend or foe?

The topic of AI technologies has been all over the media, with the recent release of ChatGPT breaking records by attracting one million users within one week of its launch. With the ability to generate human-like responses to text input, this new technology can save time, increase efficiency and improve collaboration. However, as Pam Loch, Managing Director at Loch Associates, explains in this briefing, there are concerns about its impact on productivity, employment and privacy. So, are these new methods of processing data a friend or foe in the workplace?

What are AI and ChatGPT?
AI, or artificial intelligence, refers to the ability of machines to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving and decision-making. ChatGPT takes this further: it is a specific AI model developed to specialise in natural language processing (NLP) tasks such as answering questions almost instantly, summarising information or translating language.

A time for change in the workplace?
One of the most significant benefits of using ChatGPT and other AI tools in the workplace is increased efficiency. These tools can handle routine tasks quickly and accurately, freeing up employees to focus on more complex or strategic work. For example, a customer service ‘chatbot’ can handle simple queries, leaving customer service representatives to deal with more complex problems that require a human touch. AI tools can also improve the customer journey by providing personalised and timely responses. This is particularly important in sectors such as healthcare, where patients may have questions or concerns that need to be addressed promptly but do not necessarily require input from a medical professional. A virtual assistant powered by ChatGPT could provide patients with information on their medical conditions or help them schedule appointments with healthcare providers.

However, the use of AI in the workplace also raises concerns about the impact on jobs and employment. Some worry that AI will lead to significant job displacement and leave people without work, especially those in customer-facing roles, as has been seen with the introduction of self-service checkouts. Whilst it is true that some jobs may be automated, many new opportunities could emerge as AI continues to evolve. For example, the development and maintenance of AI systems will require skilled workers in fields such as data science, machine learning and software engineering.

AI can also affect productivity, as employees may turn to tools such as ChatGPT rather than carry out tasks they regard as ‘boring’. It is therefore important to ensure your staff are on board and recognise the benefits of AI.

Another factor to keep in mind is that employees could use AI to take over tasks their employers want them to do themselves, e.g. their own research and/or analysis. This has led to employers including Goldman Sachs, Deutsche Bank and Amazon restricting employees’ use of ChatGPT. Schools have also been taking action to stop its use over cheating concerns, because the technology gives not only the answer but also the workings, the absence of which is usually taken as an indication of cheating.

What are the legal implications?
Without regulation, the use of AI processes in the workplace can raise other legal concerns, one of which is the potential for discrimination. AI technologies can perpetuate or even exacerbate discriminatory practices if not designed and implemented with care. Some key concerns are set out below.

  • Bias in AI: AI systems are only as unbiased as the data used to train them. If the data used to train an AI system is biased, the system will learn and replicate that bias. For example, if an AI system is trained on résumés from a company that historically only hired men for a certain job, it may learn to prefer male candidates. This could lead to discrimination against qualified female candidates, potentially breaching the Equality Act. It is therefore important to use diverse and representative data to train AI systems, and to regularly audit and test them for bias.
     
  • Discrimination in decision-making: AI systems are increasingly being used to make decisions that can impact people’s lives, such as recruitment, promotions and performance evaluations. If an AI system is making decisions based on discriminatory or biased factors, it can lead to unlawful discrimination against certain groups of people, and could overlook candidates who bring transferable skills, those looking to make a career change, or individuals returning from a career break to raise children. It is important to ensure that AI systems are transparent, explainable and auditable so that any potential biases can be identified and addressed. It is also essential that the relevant policies are accessible to those affected by automated decision-making.
     
  • Accommodation for disabilities: AI technologies can be a useful tool for people with disabilities in the workplace, but it is important to ensure they are accessible to everyone. Some types of technology rely on facial expressions or spoken language, which could unfairly disadvantage those with visual or hearing impairments.
     
  • Data privacy: AI technologies often require large amounts of data to function, and this data can contain sensitive personal information. An organisation that holds this data is a data controller, so it is essential to ensure that data is collected, stored and used in compliance with data privacy regulations, and that your privacy policies are updated to cover the use of this technology.
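To illustrate the kind of bias audit mentioned above, a basic first check is to compare selection rates across groups in an AI tool’s output. The sketch below is purely illustrative, assuming hypothetical group labels and hire/reject decisions; the 0.8 threshold borrows the well-known “four-fifths” rule of thumb and is not a substitute for legal advice under the Equality Act.

```python
# Hypothetical sketch: compare each group's selection rate against the
# best-performing group's rate, and flag groups that fall below a threshold.
# Group names, decisions and the 0.8 threshold are illustrative only.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 hire decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

def flag_groups(outcomes, threshold=0.8):
    """Return groups whose impact ratio falls below the threshold."""
    return [g for g, ratio in impact_ratios(outcomes).items() if ratio < threshold]

# Example: 75% of group_a selected vs 25% of group_b.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}
print(flag_groups(decisions))  # group_b's ratio is 0.25/0.75, well below 0.8
```

A real audit would go much further (intersectional groups, statistical significance, proxy variables), but even a simple check like this, run regularly on live decisions, can surface problems early.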
Despite these potential pitfalls, AI can also be used as a tool to protect people from discriminatory treatment and harassment. Some employers have used software to identify offensive and discriminatory language in employees’ emails as a way of combating racist, misogynistic and homophobic language. Again, an employer wishing to implement this type of technology would need to review and potentially introduce relevant policies.

Implementing any change, especially the introduction of new technology, should be carefully considered with the support of HR and legal professionals. If you need support reviewing or updating policies, or assistance with auditing, our team of experts at Loch Associates Group would be happy to discuss this with you.