Blog

AI Regulation and Emerging Risks

AUTHORS

Steven Marshall, CEO
Keith Vallance, Cyber Security Advisor

DATE

11.12.2023


The emergence of Generative AI and Large Language Models (LLMs), as exemplified by ChatGPT, has drawn much media attention to the potential for catastrophic risks arising from misuse of the technology, risks that were at the forefront of the recent AI Safety Summit in the UK. Whilst existential threats make for good headlines, it’s also clear that some AI risks are not just around the corner but with us today – so where should we look for a governance model that can be readily applied to AI?

The EU's Artificial Intelligence Act is currently grinding its way through the EU legislative machinery and is expected to be enacted in 2024. Upon becoming law, this Act will intersect significantly with the well-established General Data Protection Regulation (GDPR), with additional responsibilities inevitably landing on the desk of Data Protection Officers (DPOs). At the recent #RISK London conference there was strong attendance at sessions focussing on data privacy and AI, so it’s clear that those with data protection responsibilities are desperate to get a grip on the impact this regulation may have on their roles.

The EU Data Protection Supervisor (EDPS) is already advocating that “data protection authorities are designated as national supervisory authorities under the AI Act” as a means, as it puts it, “to ensure trustworthiness”. CEDPO (Confederation of European Data Protection Organizations) seems less keen on a supervisory land-grab. Its guidance considers that “not all AI and machine learning applications will result in activity which amounts to personal data processing”, and points out that the AI Act recognises AI and ML as “an area of broader ethical and societal concern that will often fall outside the scope of the GDPR”, citing the risks of AI perpetuating cultural biases or discriminatory practices, or posing real-time safety and security risks, as areas where GDPR has little relevance or application. There are some calls for DPOs to incorporate the role of AI Ethics Officer within their remit, but not all will welcome such a shift in their orbit or have the technical grasp of this fast-moving technology.

While the EU is legislating to establish a risk-based framework for AI governance, the UK is advocating for a 'contextual, sector-based regulatory framework'. This proposed framework looks set to be rooted in the UK's existing tangle of regulators and laws. The UK's approach, outlined in the white paper titled "Establishing a pro-innovation approach to AI regulation", relies on two primary components. First, it spells out the AI principles that current regulators will be tasked with implementing. Second, it introduces a series of new 'central functions' designed to bolster these regulatory efforts. It’s true that some sectors in the UK already have well-established regulatory regimes that AI oversight could reasonably slot into; however, other sectors have little or no regulation in place, which suggests that AI governance overall will be patchy as a result.

The governance of AI in the UK is also poised to be significantly influenced by the Data Protection and Digital Information Bill, currently under scrutiny in Parliament. The Bill is a deregulatory proposal that aims to reduce the burden on businesses of complying with data protection law. For example, the Bill removes the prohibition on many types of automated decision making, instead shifting the responsibility to data controllers to apply suitable safeguards. Removing such protections would seem at odds with the need to curb potential misuse of AI technology.

Rather than relying on regulation alone as a model for AI governance, it appears that the wise course for organisations engaging with AI technology is to take the management of the related risks into their own hands. The International Organization for Standardization (ISO) has published a framework for AI risk management in the form of ISO/IEC 23894, which aims to assist organisations in managing the specific risks associated with AI systems. Armed with such a framework, organisations can tailor a risk management approach that best suits their business needs and objectives, incorporating compliance with regulations as they evolve.

Clearly there are many challenges facing any organisation involved in the development of AI that wishes to get ahead of the curve and be ready for future regulation. While it’s tempting to hand the management of such a project to either the DPO or the CTO, a more balanced approach will yield better risk mitigation, given the uncertainty over where the regulatory framework will ultimately end up and the breadth of challenges being faced.

Even mapping a simple regulatory concept from the data privacy world into its equivalent in AI governance is challenging. Under Article 17 of the UK GDPR individuals have the right to have their personal data erased – or “the right to be forgotten” as it’s frequently referred to. While this is well understood in the data privacy world, what does it mean when that data has been used to train an AI model?

While the source data may easily be deleted, the “knowledge” from that data will remain in any models that incorporated it in their training sets. Until recently, it’s likely that data was obfuscated enough that personal data could not readily be extracted from the output of a model. With the introduction of LLMs, however, the level of detail that can be output is significantly higher. Guard rails can be put in place to recognise personally identifiable data and withhold it, but the output of LLMs can be far more subtle, and private data can still sometimes be inferred from the output even with guard rails in place.
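To make that limitation concrete, here is a deliberately minimal sketch in Python of the kind of guard rail described above – a filter that redacts obvious identifiers such as email addresses and UK-style phone numbers from model output before it is displayed. The pattern names and example strings are invented for illustration; real systems use far more sophisticated detection, but the gap it demonstrates is the same.

```python
import re

# Hypothetical, minimal output guard rail: redact obvious identifiers from
# model output before display. Illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b0\d{4}\s?\d{3}\s?\d{3}\b"),  # UK-style domestic numbers
}

def redact(text: str) -> str:
    """Replace any matched identifier with a placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# An explicit identifier is caught...
print(redact("Contact Jane on jane.doe@example.com or 01632 960 983."))
# ...but an indirect disclosure that merely implies who is meant slips through.
print(redact("The only cardiologist living on Elm Street retired last year."))
```

The second example is the point: pattern matching catches explicit identifiers, but it cannot catch information from which an individual’s identity or private details can merely be inferred.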

In addition, does a “right to be forgotten” include being removed from the training of a model? LLMs are increasingly complex in how they are trained, and removing a small number of data items from a model is not easily done without retraining the model in full. That can be very expensive in terms of time and compute, and hence cost, especially if such requests are frequent. For most purposes it’s effectively impossible today, although research currently underway may make it easier in the future (and evidencing that the information no longer exists is a challenge in itself).
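As a purely illustrative sketch – the Record and train names are invented here, and train() stands in for a real, potentially multi-day training pipeline – the naive way to honour an erasure request against a trained model looks like this: drop the data subject’s records and rebuild the model from scratch.

```python
from dataclasses import dataclass

@dataclass
class Record:
    subject_id: str        # the data subject the record relates to
    features: list[float]
    label: int

def train(dataset: list[Record]) -> dict:
    """Stand-in for the real training job - assume hours or days of compute."""
    return {"trained_on": len(dataset),
            "version": hash(tuple(r.subject_id for r in dataset))}

def erase_and_retrain(dataset: list[Record], subject_id: str) -> tuple[list[Record], dict]:
    """Remove one subject's records, then rebuild the model entirely."""
    remaining = [r for r in dataset if r.subject_id != subject_id]
    # Every erasure request repeats the full training cost.
    return remaining, train(remaining)

dataset = [Record("alice", [0.1, 0.9], 1), Record("bob", [0.7, 0.2], 0)]
dataset, model = erase_and_retrain(dataset, "alice")
print(model)  # proving "alice" left no trace in the new model is a separate problem
```

The structure is trivial; the expense is hidden inside train(), which for a modern LLM is exactly the full retraining cost described above, incurred again for every request.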

Even for this one, seemingly simple regulation, the combined expertise of data privacy, technical, legal, and ethical specialists is probably required to navigate the emerging landscape safely.

While regulation is still developing, it’s important to keep an open mind about where it will end up, even if regulators are currently focusing on the data privacy angle. Without in-depth expertise and guidance from a whole team of people, along with a way of tracking and monitoring emerging risks, it’s going to be very difficult to end up in the right place when the regulation settles down.

Discover what RISKGRID has to offer