Artificial Intelligence and the Public Sector

Introduction

The potential benefits of implementing artificial intelligence (AI) within a government department or agency can be significant and even transformative for the efficiency and effectiveness of that organisation – so long as it is set up to succeed. The use of AI, particularly in outsourcing deals, can give rise to a number of novel and differently nuanced issues that, if not addressed at the outset, could create significant problems in the future.  This article discusses some of the key issues that may arise.

What is AI?

There are several competing definitions of AI and much debate about what is, and is not, AI.  In short, however, AI is the simulation of human intelligence by machines, often classified as either ‘strong / hard’ AI (i.e. true human mimicry, often the focus of Hollywood) or ‘weak / narrow’ AI (i.e. focussed on a narrow task).  One of the most common applications of AI is machine learning (i.e. the ability of a machine to improve its future performance by analysing previous results).
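
As a deliberately simplified illustration of ‘narrow’ AI and machine learning, the Python sketch below shows a toy system whose decision threshold improves as it analyses more previous results.  All names, values and data in it are invented for illustration and do not represent any particular product:

```python
# Deliberately simplified illustration of machine learning: a toy system
# that improves its decision threshold by analysing previous results.
# All data here is synthetic and for illustration only.
import random

random.seed(0)

def true_label(x):
    """Ground truth the system is trying to learn: values >= 0.6 are 'high'."""
    return x >= 0.6

def learn_threshold(history):
    """Re-estimate the decision threshold from previously observed results."""
    highs = [x for x, label in history if label]
    lows = [x for x, label in history if not label]
    if not highs or not lows:
        return 0.5  # default guess before enough data has been seen
    return (min(highs) + max(lows)) / 2

history = []
threshold = 0.5
for _ in range(500):
    x = random.random()
    history.append((x, true_label(x)))    # observe another past result
    threshold = learn_threshold(history)  # 'learn' from what has been seen

print(f"learned threshold: {threshold:.3f} (true boundary is 0.6)")
```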

Transparency and Explainability

For government departments and agencies, there are two key areas in which ensuring transparency is essential.

First, there is a need for user transparency and informed consent to AI decision making.  If an organisation is using an AI tool, it should ensure there is transparency about the fact that an AI tool is being used and about the purpose of the AI tool, as well as the capabilities, limitations and potential risks of that system.

Second, public sector bodies should ensure that use of an AI system does not hinder compliance with audit and reporting obligations.  Government organisations are subject to a number of requirements for transparency and accountability, including with respect to the audit of their contracts.  Such monitoring is easier within the traditional sourcing environment, where a supplier can be audited mainly through a review of documents, reports and procedures.  In the context of AI, it is more difficult to work out how the AI system works (and evolves) over the course of the service.  It may well be that, in addition to the traditional accountants and audit professionals, forensic IT experts will need to be added to the team that performs the audit.
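
One practical design mitigation, sketched below on hypothetical assumptions (the record format, field names and log_decision helper are all invented for illustration), is to have the AI system write a tamper-evident audit record for every automated decision, so that auditors and forensic IT experts can later reconstruct how the system behaved over time:

```python
# Sketch of an audit trail for AI decisions (hypothetical schema; an
# organisation would define its own record format). Each decision is
# logged with enough context to support later audit.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, log_file="decision_log.jsonl"):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,             # which model made the call
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                              # tamper-evident fingerprint
        "inputs": inputs,
        "output": output,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: recording one automated eligibility decision (invented data).
log_decision("benefits-model-2.3",
             {"applicant_id": "A-1001", "income": 18200},
             {"eligible": True, "score": 0.87})
```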

Accuracy, robustness and security

AI systems need to have a predictable level of accuracy.  Outcomes should be reproducible, and errors should be capable of being understood and therefore fixed.
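
Reproducibility is partly an engineering discipline.  As a minimal sketch of what it can mean in practice (assuming a Python-based system; the run_pipeline function is hypothetical), pinning every source of randomness allows the same inputs to produce the same outputs on every run, so that errors can be replayed and understood:

```python
# Minimal reproducibility sketch: pin every source of randomness so that
# the same inputs always yield the same outcome and errors can be replayed.
import random

def run_pipeline(data, seed=42):
    rng = random.Random(seed)          # seeded, isolated random generator
    sample = rng.sample(data, k=3)     # any sampling is now deterministic
    return sorted(sample)

data = list(range(100))
assert run_pipeline(data) == run_pipeline(data)  # identical outcome on re-run
print(run_pipeline(data))
```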

Machine learning-based systems are ‘trained’ via a feedback loop. In theory, as the system is exposed to more data, it ought to improve continually and the accuracy of its ‘decisions’ should therefore increase. As with most computer systems, however, the old adage of ‘garbage in, garbage out’ still applies. If an apparently high-performing system is continually exposed to poor quality data, or to data which suggests that incorrect decisions are in fact correct, then its accuracy – in objective terms – will gradually diminish, even though its measured performance against that flawed feedback will continue to look high.
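
The toy simulation below illustrates this effect on synthetic data: a simple learner trained on feedback that marks incorrect decisions as correct drifts away from reality, even while its measured performance against that same feedback remains high.  All boundaries and values are invented for illustration:

```python
# Illustration of 'garbage in, garbage out' on synthetic data: a simple
# learner fits corrupted feedback. Its accuracy against reality falls,
# while accuracy measured against the flawed feedback still looks high.
import random

random.seed(1)
TRUE_BOUNDARY = 0.6   # ground truth: values >= 0.6 are genuinely 'high'

def corrupted_label(x):
    # One data source systematically mislabels the 0.4-0.6 band as 'high'.
    return x >= 0.4

xs = [random.random() for _ in range(2000)]
highs = [x for x in xs if corrupted_label(x)]
lows = [x for x in xs if not corrupted_label(x)]
threshold = (min(highs) + max(lows)) / 2   # the learner fits the bad feedback

test = [random.random() for _ in range(2000)]
objective = sum((x >= threshold) == (x >= TRUE_BOUNDARY) for x in test) / len(test)
measured = sum((x >= threshold) == corrupted_label(x) for x in test) / len(test)
print(f"accuracy vs reality:  {objective:.2%}")   # degraded (around 80%)
print(f"accuracy vs feedback: {measured:.2%}")    # still looks near-perfect
```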

Any biases, inaccuracies or bad assumptions present in the human users (whose actions form the training data used to train the system) will be reflected in the decisions made by the trained system. Similarly, if the system is continually being fed data from different sources, and one source is continually providing incorrect feedback on the decisions taken by the machine learning system, that will affect the accuracy of the system.  It will be a particular concern for public bodies to ensure that their AI systems produce accurate and impartial results, free from bias and prejudice, in particular with respect to decisions that could have an adverse impact upon an individual’s life.  Departments will therefore need to ensure that datasets are balanced and representative, and should work with developers during the design phase to ensure that the algorithms themselves do not contain biases.
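
A simple pre-training check, sketched below with hypothetical field names and an illustrative tolerance, is to measure whether each group in the dataset is represented in roughly the expected proportion before the data is used:

```python
# Sketch of a pre-training balance check (hypothetical field names and
# tolerance): flag groups that are under- or over-represented before the
# data is used, as an early guard against encoding historical bias.
from collections import Counter

def check_balance(records, attribute, tolerance=0.10):
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    expected = 1 / len(counts)   # naive expectation: equal representation
    skewed = {g: s for g, s in shares.items() if abs(s - expected) > tolerance}
    return shares, skewed

records = [
    {"region": "north", "outcome": 1}, {"region": "north", "outcome": 0},
    {"region": "north", "outcome": 1}, {"region": "south", "outcome": 0},
]
shares, skewed = check_balance(records, "region")
print("group shares:", shares)
if skewed:
    print("warning - under/over-represented groups:", skewed)
```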

Security will also be a key priority, in particular ensuring resilience against attacks and manipulation of data or algorithms.
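
One basic technical control, sketched below on the assumption that approved datasets and models are hashed at sign-off (the file name and approved value are placeholders for illustration), is to verify integrity before each use so that manipulation is detected rather than silently absorbed:

```python
# Sketch of a basic integrity control against manipulation of data or
# models: compare a file's hash to a known-good value recorded at approval.
import hashlib

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in file for this demonstration (empty, so its digest is the
# well-known SHA-256 of empty input, used as the approved value below).
open("training_data.csv", "wb").close()
APPROVED_HASH = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

if file_sha256("training_data.csv") != APPROVED_HASH:
    raise RuntimeError("training data has changed since approval - investigate")
print("training data integrity verified")
```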

Accountability

Organisations harnessing AI systems need to maintain accountability for those systems, including with respect to the ethical acceptability and public trustworthiness of their outcomes and processes.  Human oversight will remain the standard for managing AI risk at all stages, not just with respect to accuracy, but also with respect to recourse and established procedures for rectifying issues, based on clear record keeping.  This means that humans must remain “in the loop” throughout every stage of the lifecycle of the AI system.
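
The sketch below illustrates one common ‘human in the loop’ pattern (the confidence threshold, field names and record format are hypothetical): automated outcomes below a confidence floor are routed to a human reviewer, and every decision is recorded to support recourse and rectification:

```python
# Sketch of human-in-the-loop oversight (hypothetical threshold and schema):
# low-confidence outcomes go to a human reviewer, and every decision is
# recorded to support recourse and rectification procedures.
CONFIDENCE_FLOOR = 0.90
audit_trail = []

def decide(case_id, model_output):
    score = model_output["confidence"]
    if score >= CONFIDENCE_FLOOR:
        decision = {"case": case_id, "route": "automated",
                    "outcome": model_output["label"], "confidence": score}
    else:
        decision = {"case": case_id, "route": "human_review",
                    "outcome": None, "confidence": score}
    audit_trail.append(decision)   # clear record keeping for later recourse
    return decision

print(decide("C-101", {"label": "approve", "confidence": 0.97}))
print(decide("C-102", {"label": "reject", "confidence": 0.62}))  # to a human
```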

Data Protection

AI poses some obvious data protection concerns. The main concerns stem from the fact that any AI system is, by definition, based on the processing of a large volume of data. Initially, such data may not be personal data within the meaning of the General Data Protection Regulation (GDPR), but it can become personal data (i.e. attributable to an identified or identifiable person) or even sensitive data, as a result of deep pattern matching techniques and other processing that AI might perform.

This may result in data being processed in a manner for which consent has not been granted and for which no other lawful justification applies, or beyond the boundaries set by an earlier consent. Furthermore, the AI solution may end up making its own decisions about data management, thus changing the purposes laid down by the data controller, who remains ultimately responsible for the data processing.

All such issues will have to be carefully addressed in the design phase, when it is being decided how an AI solution will function and what technical controls can be applied, and also in any agreement between parties involved in using that AI solution to process data.
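
As one example of the kind of technical control that might be agreed at the design phase (the PurposeLimitedData class and its interface are purely illustrative), data can be tagged with the purposes for which consent exists, so that processing for any other purpose fails loudly rather than proceeding silently:

```python
# Sketch of a purpose-limitation control (hypothetical interface): every
# dataset carries the purposes consented to, and processing for any other
# purpose is refused rather than silently allowed.
class PurposeLimitedData:
    def __init__(self, records, permitted_purposes):
        self._records = records
        self._purposes = set(permitted_purposes)

    def access(self, purpose):
        if purpose not in self._purposes:
            raise PermissionError(
                f"processing for '{purpose}' is outside the consented purposes"
            )
        return self._records

data = PurposeLimitedData([{"id": 1}], permitted_purposes={"service_delivery"})
print(data.access("service_delivery"))   # permitted purpose
# data.access("profiling")               # would raise PermissionError
```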

A flexible risk-based approach to the entire AI lifecycle

There is no “one size fits all” approach to the issues outlined above, and so most guidance recommends adopting a risk-based approach.  A key starting point will be ensuring that your organisation has a multi‑disciplinary governance framework in place which builds compliance and ethical thinking into every level of the organisation (either through adapting existing internal governance structures or implementing new structures where necessary).  This foundation needs to be established from the top to the bottom of the organisation, and throughout the end‑to‑end AI lifecycle, from initial use cases through to the design, development, testing, implementation and maintenance of the AI solution.

Further Reading

Various organisations have developed guidance and regulatory proposals for the use of AI in the public sector.