As we start a new year, it’s always worth reviewing the year just past and planning for the one ahead. In the world of data protection and AI governance, 2025 was a busy year, and 2026 is likely to be just as busy.
In 2025, Ireland’s Data Protection Commission reached its full complement of three commissioners with the appointment of Niamh Sweeney, who joined commissioners Dr. Des Hogan and Dale Sunderland. The third commissioner’s appointment came as the DPC anticipates expanded responsibilities.
In 2026, the DPC will assume a more prominent role in AI regulation. Ireland designated 15 National Competent Authorities under the EU AI Act in September 2025, becoming one of the first six Member States to reach this milestone. The DPC is expected to play a significant role as one of these designated authorities, particularly where AI systems intersect with data protection and fundamental rights.
In 2025, the DPC continued its pattern of significant enforcement actions. The most substantial fine, totalling €530 million, was imposed on TikTok for violations related to transparency requirements and the lawfulness of transfers of European user data to China. The decision addressed TikTok’s failure to adequately assess Chinese law and to verify that supplementary measures and standard contractual clauses were effective in ensuring data protection equivalent to EU standards.
But it’s the DPC’s approach to AI that reveals the most about its evolving strategy. In March 2025, LinkedIn informed the DPC of its intention to train proprietary generative AI models using personal data of LinkedIn members in the EU/EEA. Following detailed review and extensive engagement, the DPC identified multiple risks and issues, leading LinkedIn to adopt significant changes including improved transparency notices, reduced data processing scope, and enhanced protections for users under the age of 18.
This interventionist approach signals a shift from reactive enforcement to collaborative pre-deployment engagement with AI developers. It is a model the DPC has also used with Meta and other LLM providers. The question for 2026 is whether this model will become the norm. At a recent hackathon hosted by OpenAI, Gemma Harris, an assistant commissioner with the DPC, said that organisations must build systems that people can trust, and that the DPC will engage to ensure data protection requirements are met. Organisations can expect continued early-stage scrutiny of AI systems.
Looking ahead to 2026, we anticipate several developments. First, we may see increased enforcement focus on automated decision-making systems in financial services and insurance, particularly where AI systems determine creditworthiness, insurance premiums, or claims outcomes without meaningful human oversight. The DPC will likely scrutinise whether organisations have conducted robust Data Protection Impact Assessments (DPIAs) for these high-risk processing activities, whether data subjects are being properly informed of their Article 22 rights, and whether transparency requirements are met.
Second, workplace AI surveillance may emerge as a regulatory battleground. Employee monitoring systems using AI—from productivity tracking to sentiment analysis tools—will face heightened scrutiny regarding lawfulness of processing, proportionality, and workers’ fundamental rights. Organisations deploying such systems should anticipate DPC investigations into whether legitimate interest assessments adequately balance employer needs against employee privacy.
Third, the intersection of AI transparency obligations under the EU AI Act and GDPR’s Article 15 access rights will create compliance challenges. Organisations must prepare for individuals exercising their right to obtain meaningful information about AI logic and decision-making processes. The DPC will expect clear, intelligible explanations, not technical jargon or algorithmic black boxes. This point was made by Gemma Harris at the OpenAI hackathon and by various DPC speakers at a Tech DPO Network event hosted by Meta in Q1 2025.
Finally, we predict the DPC will issue sector-specific guidance on AI governance, particularly for healthcare and pharmaceutical companies processing sensitive health data through AI analytics platforms. Organisations that have relied on vague consent mechanisms or questionable legitimate interest grounds for AI training will find themselves in the regulatory crosshairs.
As Ireland positions itself at the forefront of responsible AI governance in Europe, organisations that engage proactively with the DPC’s pre-deployment framework may find themselves better positioned not just to meet regulatory requirements, but to lead in an era where privacy-respecting innovation becomes a competitive advantage. That said, providers of LLMs may be wary of voluntarily attracting regulatory scrutiny. The question isn’t whether 2026 will bring heightened scrutiny, but whether organisations will seize this moment to demonstrate that cutting-edge technology and robust data protection can advance hand in hand.
Please contact info@pembrokeprivacy.com to learn more about how we can help with your data protection and privacy needs.