Artificial Intelligence (AI)
As discussed in our last post on AI, artificial intelligence is a broad term for engineered systems in which machines learn from experience, adjust to new inputs and perform tasks previously done by humans. The field is evolving rapidly across sectors and industries, and the technology's global expansion can fairly be described as revolutionary. It is generally accepted that AI will transform the way we live and work. As with any fast-moving technology, the law is scrambling to keep pace. The definition of AI currently agreed by European lawmakers is as follows: "'Artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments."
The EU's AI Act focuses primarily on strengthening rules around data quality, transparency, accountability and, importantly, human oversight. Its purpose is to ensure that AI systems developed or used within Europe conform to EU rights and values, encompassing principles such as human supervision, safety, privacy, transparency, impartiality, and the promotion of social and environmental welfare.
It is important to remember that AI cannot judge whether a decision is morally or ethically right or wrong, so a broad spectrum of risk is inherent in the use of any AI system. The cornerstone of the AI Act is therefore a classification system that determines the level of risk an AI technology could pose to a person's health, safety or fundamental rights, with the legislation setting out four risk tiers: unacceptable, high, limited and minimal.
We know AI has the potential to create significant benefits across society, but it also poses serious concerns. Collectively, we need to ensure this hugely powerful technology, which will shape our day-to-day lives, is implemented safely and in a way we can trust. As Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, has said:
“on artificial intelligence, trust is a must”.
This is where people come in. The burning question facing every manager involved in the governance of AI is how to build a future in which we harness AI's potential benefits while mitigating the risks outlined above and avoiding its pitfalls.
As the AI landscape develops, there is a fundamental requirement to build human oversight and trustworthy practices in from the outset, for many reasons: to protect reputational trust, to avoid regulatory attention, sanctions and fines, and, of course, to avoid bias and unfairness more generally.
There is an absolute requirement for a human-centred approach to AI oversight and governance: the EU AI Act itself requires human oversight of certain AI systems and, in its current form, contains no fewer than 25 references to human oversight.
Specially trained governance professionals will play a crucial role in the inevitable and widespread design, roll-out and deployment of AI. Existing parallel professions, such as privacy, cybersecurity, data governance, risk management, compliance and organisational ethics, can and must skill up rapidly to fill this need.
To meet this demand, the IAPP has developed the Artificial Intelligence Governance Professional (AIGP) certification and training for the emerging AI governance profession. IAPP President and CEO Trevor Hughes, CIPP, said: "Defining a common body of knowledge, building training and creating a shared language for AI governance are massively important steps towards safety and trust in AI."
Pembroke Privacy’s AI Governance Professional Training is for professionals tasked with implementing AI governance and risk management in their organisations. It provides baseline knowledge and strategies for responding to complex risks associated with the evolving AI landscape. This training meets the rapidly growing need for professionals who can develop, integrate and deploy trustworthy AI systems in line with emerging laws and policies.