AI in the UK public sector: use cases and regulatory overview

Artificial intelligence (AI) is no longer a distant technological ambition; it’s fast becoming a core tool in the delivery of public services across the UK. From helping councils allocate resources more effectively to enabling faster decision-making in central government, AI is transforming how the public sector operates. But with great potential comes great responsibility, particularly when it comes to ensuring that these technologies are used ethically, lawfully, and transparently.

This article explores the most relevant and impactful use cases for AI in the UK public sector and provides a clear, plain-English overview of how AI is currently regulated, with a focus on what public sector professionals need to know.

Real-world use cases of AI in the public sector

The public sector is uniquely positioned to benefit from AI, given its wealth of data, broad range of services, and constant drive for efficiency and cost savings. Here are some of the most common and emerging uses of AI in the UK government and public services.

1. Healthcare and the NHS

AI is already being deployed in parts of the NHS to support diagnostic tools, such as interpreting medical imaging, predicting patient deterioration, and flagging high-risk cases. AI chatbots and virtual assistants are also helping patients access information and triage services more efficiently.

2. Social Services and safeguarding

Several local authorities use AI and machine learning models to identify vulnerable children or families who may need early intervention. These systems analyse large amounts of case data to help social workers make more informed decisions. However, this use comes with ethical concerns, particularly around transparency and bias, which we’ll return to shortly.

3. Education

AI is being trialled to personalise learning experiences, automate administrative tasks, and even predict student outcomes. Some universities are also using AI to detect plagiarism or cheating in coursework submissions.

4. Policing and criminal justice

The Police Foundation estimated that approximately 15% of UK police forces used AI tools in 2019, but more recently the National Police Chiefs’ Council has stated that all police forces use data analytics. Police forces in the UK are using AI to assist with back-office support functions, redaction, facial recognition, predictive policing, and digital forensics, for example analysing CCTV footage or sorting through digital evidence.

These uses are particularly sensitive and attract considerable scrutiny under data protection legislation and due to their potential impact on civil liberties. By way of example, the use of live facial recognition technology by South Wales Police was held to be unlawful by the Court of Appeal in 2020.

5. Local government and administration

Councils are deploying AI to improve waste collection schedules, analyse feedback from residents, and streamline customer service. Chatbots, for instance, can answer frequently asked questions, freeing up time for staff to handle more complex queries.

6. Central government and policy-making

AI is being explored to model the outcomes of different policy scenarios and support evidence-based decision-making. HMRC, for example, uses AI to detect fraud and anomalies in tax filings. In January 2025, the Government announced a plan to launch a bundle of AI tools, known as “Humphrey”, across Whitehall to speed up decision-making and improve government efficiency.

How is AI regulated in the UK?

AI regulation in the UK is still evolving. There is no UK equivalent to the EU AI Act, but there is a growing framework of legal principles and guidance that public sector organisations must follow. Here’s a brief breakdown of the current landscape.

1. Data protection and GDPR

The most immediate legal concern for any AI system processing personal data is compliance with the UK GDPR and the Data Protection Act 2018.

The Information Commissioner’s Office (ICO) has published detailed guidance on how data protection law applies to AI. The guidance requires you to take into account the standard data protection principles, including accountability (usually demonstrated by way of a data protection impact assessment and, for public sector bodies, an equality impact assessment), the lawfulness of each processing operation performed by the AI, transparency, and fairness.

If an AI tool is used to make decisions (such as who qualifies for a benefit or whether a planning application is flagged for review), Article 22 UK GDPR restricts solely automated decision-making that has legal or similarly significant effects: there must be meaningful human oversight and an opportunity to challenge the decision.

2. The UK Government’s AI regulation strategy

In March 2023, the UK Government published its white paper “AI Regulation: A pro-innovation approach”. Rather than introduce “heavy-handed legislation that could stifle innovation”, the Government has opted to take “an adaptable approach to regulating AI” based on the following key principles:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The Government wishes existing regulators (such as the ICO and the FCA) to use their expertise to tailor the implementation of these principles to the uses of AI in each regulator’s specific sector. A number of regulators have now developed sector-specific guidance based on these principles: for example, the ICO, as noted above, has issued guidance on the use of AI and data protection, and, for central government departments, the Government Digital Service (GDS) published the AI Playbook in February 2025, which requires compliance with ten key principles.

3. Algorithmic transparency standards

For the public sector, the Cabinet Office and the Central Digital & Data Office have introduced the Algorithmic Transparency Recording Standard (the “ATRS”), a framework that public bodies can use to explain how algorithms are used in decision-making.

Using the ATRS enables public organisations to be open about how AI tools influence decisions, particularly where outcomes affect individuals or communities. Use of the ATRS is a requirement for all central government departments, but it is anticipated that this will be extended to the broader public sector over time.

4. Ethics and procurement

Public bodies procuring AI tools should also consider ethical guidance, such as that from the Alan Turing Institute or the Government Digital Service (GDS).

When contracting for AI, authorities should be aware that their standard contracts (even contracts developed for SaaS solutions) are unlikely to cover the issues that should be dealt with in an AI contract. In particular, those contracts should be clear on the specification for the AI (and the contractor’s obligation to meet that specification), data protection issues, what data will be used to train the system and how and by whom that training will be carried out, ownership of the AI algorithm, liability, performance standards, and mechanisms for oversight.

What should public sector professionals do next?

If you’re using or planning to use AI in your role, here are some practical steps to ensure you stay compliant and responsible:

  • Conduct a Data Protection Impact Assessment (DPIA): Especially where personal data is involved.
  • Understand the algorithm: Ensure you or someone in your team can explain how the AI system works, at least in principle. Consider how this impacts your use of personal data.
  • Record transparency details: Use the government’s transparency standards to document AI systems and their impacts.
  • Ensure human oversight: Avoid fully automated decision-making unless absolutely necessary and lawful.
  • Keep up with guidance: Monitor updates from central government, the ICO, and relevant professional bodies.
  • Review procurement contracts: Work with your legal team to ensure AI-related contracts are robust and reflect best practices.

Conclusion

AI offers the UK public sector powerful tools to improve services, boost efficiency, and unlock new insights. But these benefits must be balanced with a strong commitment to transparency, fairness, and legality. With the regulatory landscape still developing, public sector leaders and users alike need to stay informed and proactive.

At Geldards, our IT & Technology team regularly advises public bodies on responsible AI adoption, procurement, and regulatory compliance. If you have questions about how AI might impact your service area or need help navigating the legal landscape, our specialists are here to support you.
