“For companies, it is not just about limiting risks or complying with the law,
but rather about re-branding themselves with different values, far away
from exploitative techniques”
– CPDP panelist
I had the privilege of attending the International Conference on Computers, Privacy and Data Protection (CPDP 2022) last week. This three-day conference brought together different stakeholders – from engineers to privacy scholars, lawyers, regulatory bodies such as the European Data Protection Board, activists, and even artists – for discussions about the future of our data in the digital space.
Six years after the introduction of the General Data Protection Regulation (GDPR) in the European Union, this year’s theme was ‘Data Protection & Privacy in Transitional Times’, as we enter a (challenging) era in which new regulatory proposals are emerging. These proposals will have a significant effect on organisations that use and process data, particularly through Artificial Intelligence tools and social networks. With the technologies that have emerged in recent years – such as biometric identification, digital COVID passes, automated cars, and virtual platforms – this year’s conference gave special attention to the core values, fundamental rights and principles that will guide our future, and to the role of technology within it. Many CPDP panels were dedicated to discussing how these new regulatory proposals will affect us, and the challenges they will bring. The proposals can be summarised under four main regulations: the Digital Services Act (DSA), the Digital Markets Act (DMA), the Artificial Intelligence Act (AI Act) and the Data Governance Act.
The role of Artificial Intelligence, and of the AI Act in particular, was especially significant at this year’s conference. This new regulatory instrument will classify AI systems according to their level of risk. As a result, many panellists pushed strongly for the inclusion of a human rights impact assessment within the clauses of the AI Act.
The AI Act also expands upon the concept of personal data as we know it from the GDPR. Although AI systems may not directly process personal data, they still have a significant impact on individuals, and the developers and users of these systems will be held responsible. Thus, rather than empowering end-users through data subjects’ rights as the GDPR does, the AI Act focuses on imposing obligations on developers to comply with certain standards. Different panels also addressed gender-neutrality in AI systems, and discussed including data protection rights and other privacy-preserving design practices (legal and technical) to prevent discriminatory outcomes.
In addition, there were a few relevant takeaways from the panels on the new regulatory proposals. Proposals such as the DMA and DSA share common values at their core – fairness, transparency, purpose limitation and data minimisation – as well as restrictions on sensitive data processing and an increased focus on vulnerable individuals. At the same time, some panellists highlighted the lack of regulatory oversight: we still lack the concrete guidelines that are essential for operationalising the new laws. There is a need for stronger enforcement and standardisation, and for further discussion with key stakeholders to clarify and create specific definitions and criteria within these new regulatory instruments. This is precisely why the European Data Protection Supervisor, in his opinions on digital regulations, has consistently called for “structured” mechanisms for cooperation among regulators and for explicit provisions on information sharing in specific cases and investigations. Similarly, there was an emphasis on the difficulty companies face in adjusting to transparency requirements under current regulations. Companies voiced concern about the burden of documentation and the need to facilitate compliance processes under unclear standards.
Dark patterns and online manipulation were also recurring themes at CPDP. The new guidelines on dark patterns triggered some fiery discussions: how can we assess persuasive designs, and how do we distinguish them from manipulative systems? Dark patterns are still not clearly defined in legal instruments. They range from manipulative cookie banners to manipulative AI algorithms that trick users into buying certain products, and current policies are not equipped to deal with these types of manipulative designs, which are increasingly prevalent in the digital realm.
To conclude, CPDP offered inspiring and cutting-edge ideas for the future of privacy protection and for the digital economy. The GDPR is still considered a relatively ‘young’ legal instrument, born only six years ago, and we now face new legal instruments that will challenge our practices with new legal tools. Time and again, experts noted that we need more discussion on the interplay between the GDPR and newly proposed regulations such as the DMA, DSA and AI Act. During CPDP, there was a strong focus on the importance of human rights as interrelated with privacy and data protection law, and on the need to balance fundamental rights such as freedom of expression with data protection. There is a strong interest in the EU in continuing to investigate the limitations of current regulations and the impact of emerging technologies on the wellbeing of our democracy. It was an engaging and informative few days in the heart of Europe, discussing the future of privacy.