AI – IS IT THE DPO’S RESPONSIBILITY?

Today, we talked to Natalie, our Head of Consultancy, about an issue that is becoming increasingly relevant, and increasingly concerning, in the world of data privacy: the lightning-speed development of AI.

Tools like ChatGPT have risen rapidly in popularity and are fast becoming mainstream, largely because they can perform tasks in a fraction of the time it would take a person. As promising as that sounds, there is also genuine concern around data security.

But what is Natalie's take on these developments?

“HOW DO YOU FEEL THE CONVERSATION ABOUT AI HAS CHANGED RECENTLY?”

Well, where to start…?
Recently, I feel as though we are swimming (or drowning) in information about AI. Artificial intelligence is the latest hot topic, and it's not hard to see why. I see countless AI-related posts on LinkedIn and have already had plenty of conversations about it in my professional life, but now my personal life seems to be on the hit list too: social media posts, peers referring to AI models on apps such as Snapchat, and, of course, ChatGPT cropping up at every turn.

“HOW USEFUL IS IT FOR YOU?”

Sure, I have played around with it. It'll create some mildly amusing rhymes in a variety of dialects (who said data protection can't be fun?). I have asked it to create learning objectives for me, and a framework for a blog. No, it isn't what I would write, and no, I would not necessarily use it, but I work in a regulated environment that relies on my decision-making and my ability to problem-solve and be pragmatic. If I worked in a different environment, perhaps AI's progress would have me feeling threatened too.

“ARE DATA PROTECTION PROFESSIONALS BEST SUITED TO IMPLEMENT AI?”

My challenge is not with the advancement of technology; in fact, I welcome it, and I think it is important that we understand it and embrace its potential. But that is precisely my challenge at the same time: I am a data protection professional, not an AI one. I'm working hard to get up to speed and have attended talks and webinars and enrolled on numerous courses to aid my own professional development. But should AI really be falling into the laps of DPOs, consultants, and other privacy professionals?

At the moment, it fits. I can see how some of these issues end up in my inbox, and I can understand why I am being asked to attend meetings about AI implementation. But is that because there is nobody who knows any better, and the DPO simply seems the best fit? Likely.

“DO YOU CONSIDER THE TECHNOLOGY A DATA PROTECTION RISK?”

The language used mirrors that of UK (and EU) data protection legislation, and the conversation revolves around the risks to users… it certainly sounds like a data protection issue. But there is so much more to consider. I have no qualms admitting that I do not (yet) understand the technology behind self-driving cars, advanced chatbots, and the infrastructure that supports them, but I do understand that there are risks around the accuracy of information, the ownership of AI outputs, biased programming, and unclear regulation. All of which surely should be considered in the conversations I am party to.

“WHAT EFFECTS WILL AI HAVE ON PEOPLE AND THEIR DATA RIGHTS?”

So, where does that leave us? Who is, or should be, responsible? I don't know the answer. Yes, as privacy professionals, we can advise on the use of personal data: defaulting to the GDPR's principles, assessing the risk to the data subject, and highlighting any potential compliance hazards. But is that enough? I can only deduce that the best way to make these decisions is to have a computer scientist, a DPO, and some sort of ethics expert in the room. But realistically, how many organisations have those sorts of resources, or even the finances to facilitate that? Of course, some do: the behemoths of the tech industry are already there, making leaps in how they use AI to extract more and more value from their customers. But wasn't the original point of OpenAI to make AI accessible to the masses?

CONCLUSION

Until we have clear regulation, I suspect that the vast majority of these questions, and my apprehension, will remain. So, for the time being, I will continue with the learning. I will continue to attend the meetings, have the conversations, and be as honest about my position on all of this as possible. But I look forward to there being clearer regulation, definitive responsibility, and direct accountability.
