What Your Organisation Needs to Know About AI Risk and Privacy

“The futures of privacy, AI and trust in society are tied more closely together every day and highlight the imperative to develop AI governance skills at scale, and quickly”

– Trevor Hughes, President and CEO of the International Association of Privacy Professionals.

In our previous article on the European Union’s proposed AI Act, we outlined its key provisions. This article focuses on practical considerations for organisations seeking to understand the risks of using AI internally and to ensure data protection compliance.

Risks Associated with the Use of AI Internally

1. Confidential Information and Personal Data

The first risk to address is the potential release of confidential information and personal data.

When users input confidential information into generative AI tools, there is a risk that the same information, including any personal data it contains, will later appear in the tool’s output.

Content generated by AI may include personal data falling within the scope of the General Data Protection Regulation (GDPR), which requires a lawful basis for the processing of personal data. Using such data without an appropriate lawful basis may expose your organisation to liability for infringement of the GDPR.

Bad actors may also gain access to personal data through hacking and other breaches, which can have catastrophic effects for the individuals whose data is compromised and for the organisation, which may face fines, sanctions and reputational harm.

This risk is why OpenAI and Google, both developers of AI systems, have warned users against including confidential information in their interactions with chatbots.

Data which has not been appropriately anonymised or pseudonymised has already been at the centre of several workplace incidents. In one case, confidential information was exposed after an employee input a recording of a meeting into ChatGPT for further analysis; in others, employees provided proprietary company data to AI apps such as ChatGPT. Significant intellectual property issues also arise. One practical safeguard is to pseudonymise text before it leaves the organisation, as in the sketch below.
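
The following is a minimal Python sketch of that idea; the regex patterns, the pseudonymise helper and the sample meeting note are illustrative assumptions, and a real deployment would use a vetted PII-detection tool covering far more identifier types.

```python
import re

# Illustrative patterns only -- a vetted PII-detection library would
# cover far more identifier types (names, addresses, IDs, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace likely personal data with placeholder tokens.

    Returns the redacted text plus a token-to-value mapping that is
    kept locally, so the original values never leave the organisation.
    """
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

redacted, mapping = pseudonymise(
    "Minutes: Jane's email is jane.doe@example.com, phone +353 1 234 5678."
)
print(redacted)  # identifiers replaced with <EMAIL_0> / <PHONE_0> tokens
# Only `redacted` is sent to the external AI service; `mapping` stays in-house.
```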

2. Inaccuracy of Data

While powerful, artificial intelligence is not always accurate. An AI “hallucination” occurs when a model that is unsure how to respond to a query generates inaccurate output but presents it as fact. This raises red flags in the workplace context, because a hallucinated response can sound entirely plausible.

Article 5 of the General Data Protection Regulation lays out the key data protection principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.

Article 5(1)(d) deals with the accuracy principle, providing that controllers must ensure personal data is accurate and, where necessary, kept up to date. Reasonable steps must also be taken to ensure that data which is inaccurate for the purposes for which it is processed is rectified or erased without delay.

Because AI can produce inaccurate information, relying on AI-generated data can cause an organisation to fail in its obligation to keep personal data accurate. Such reliance can likewise conflict with the fairness principle under the GDPR where inaccurate data is used to make inferences or decisions about people which do not match who they really are.

The United Kingdom’s Information Commissioner’s Office (ICO) advises that, where AI systems are used to make inferences about people, organisations should ensure those inferences are statistically accurate for the purposes for which they are drawn.

The AEPD, the Spanish data protection authority, has published guidance on the risk of inaccuracy in AI-generated data. It notes that, when processing data, the accuracy principle should be applied both to the data that is input and to the use made of the data that is output. One simple way to operationalise this is sketched below.
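
As a hedged illustration of applying the accuracy principle to AI output, generated fields can be checked against an authoritative record before being stored or acted upon. In this Python sketch, system_of_record, verify_ai_output and the field names are hypothetical stand-ins for an organisation’s own data infrastructure.

```python
# `system_of_record` and the field names below are hypothetical; a real
# deployment would query the organisation's authoritative database.
system_of_record = {
    "cust-001": {"name": "A. Murphy", "date_of_birth": "1990-04-12"},
}

def verify_ai_output(customer_id: str, ai_fields: dict[str, str]) -> dict[str, str]:
    """Keep only AI-generated fields that match the authoritative record.

    Unverifiable fields are flagged for human review instead of being
    stored, in line with the Article 5(1)(d) accuracy principle.
    """
    record = system_of_record.get(customer_id, {})
    verified: dict[str, str] = {}
    for_review: dict[str, str] = {}
    for field, value in ai_fields.items():
        if record.get(field) == value:
            verified[field] = value
        else:
            for_review[field] = value
    if for_review:
        print(f"Flagged for human review ({customer_id}): {for_review}")
    return verified

# The AI-generated summary states a different date of birth, so that
# field is held back for review rather than stored.
clean = verify_ai_output(
    "cust-001", {"name": "A. Murphy", "date_of_birth": "1991-04-12"}
)
print(clean)  # {'name': 'A. Murphy'}
```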

An illustration of how crucial accurate data is can be seen in credit scoring systems, where inaccurate data could have significant consequences for individuals who may be refused credit on the basis of inaccurate conclusions.

Due to the far-reaching implications of inaccuracy of personal data in decision-making and the stipulations by data protection regulations on the importance of accuracy, emphasis should be placed on minimising the use of inaccurate data as far as practicable.

3. Bias and Discrimination

It is well-documented that use of artificial intelligence can lead to bias and discrimination.

As noted above, one of the principles underpinning data protection is fairness. Bias and discrimination are, by definition, unfair to those affected by them. Additionally, Article 9 of the GDPR sets out additional obligations when processing “special categories of personal data”. These special categories are often grounds for discrimination and bias, and include:

  1. Personal data revealing racial or ethnic origin
  2. Political opinions
  3. Religious or philosophical beliefs
  4. Trade union membership
  5. Genetic data and biometric data processed for the purpose of uniquely identifying a natural person
  6. Data concerning health
  7. Data concerning a natural person’s sex life or sexual orientation

Why might the use of AI lead to bias and discrimination? AI systems may be trained on datasets which are statistically unbalanced or which reflect past discrimination against groups of people. This perpetuates a discriminatory cycle and, for organisations that push for fairness and equality, conflicts with core values.

The issues of bias and discrimination have been flagged most frequently in the context of recruitment practices. In fact, AI systems used in employment, worker management and access to self-employment are classed as “high risk” under the EU’s proposed Artificial Intelligence Act and will need to be registered in an EU database. A much-reported example of these risks in practice is an AI recruitment tool that preferred male candidates over female ones, penalising applications which the tool identified as having been submitted by women.

Organisations, therefore, should be cognisant of the potential for bias and discrimination in their internal practices when leveraging AI. Testing at each stage of the project life cycle, and again on implementation, can help identify problems; one simple check is sketched below.
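
As one illustration of such testing, the selection rates an AI tool produces for different groups can be compared after each run. The Python sketch below applies the “four-fifths” rule, a heuristic drawn from US employment practice rather than from the GDPR itself, and the decisions list is invented data for illustration.

```python
from collections import Counter

# Invented data for illustration: (group, selected) outcomes from a
# hypothetical AI screening run over two applicant groups.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals: Counter = Counter()
selected: Counter = Counter()
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected  # a bool counts as 0 or 1

# Compare each group's selection rate with the best-performing group;
# a ratio below 0.8 fails the widely used four-fifths heuristic.
rates = {group: selected[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "REVIEW: possible adverse impact"
    print(f"group {group}: rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```

A failing ratio does not prove discrimination, but it is a cheap, repeatable signal that a human should examine the tool’s behaviour before it is relied upon.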

What Now?

Identifying the above risks only begins to scratch the surface of what inappropriately managed AI can do. Pembroke Privacy continues to keep our clients informed of the risks associated with these technologies and of how to prevent such risks from materialising by implementing a robust Responsible AI framework. The team at Pembroke Privacy would be happy to discuss how we can help you identify and manage these risks within your organisation.

Pembroke Privacy is an official partner of the International Association of Privacy Professionals (IAPP) in Ireland to deliver globally recognised professional certified training including CIPP/E, CIPM, CIPT and now, Artificial Intelligence Governance Professional (AIGP) training.

Our first Artificial Intelligence Governance Professional (AIGP) course takes place on 23 & 24 November 2023 live online via Microsoft Teams. We’d like to invite you to avail of this new training opportunity! You can book your place on the course by clicking on the link below.
