AI and Data Protection

Artificial intelligence has been dominating public discourse around the globe in recent weeks.

ChatGPT reached an estimated 100 million users a mere two months after launch (with copywriters everywhere quaking in their boots), and even Google is now integrating its own AI software into its products. Sharing the spotlight with conversational AIs like ChatGPT, AI software for generating images and videos has taken over social media (who wouldn’t want to see themselves as a Renaissance portrait?). There is even software that can imitate any person’s voice with striking accuracy after processing only a few samples.

The potential for such technology to be utilised for both benign and not-so-benign purposes is clear. At an organisational level, many executives have already begun implementing AI tools in their operations. These range from the relatively simple, such as customer service chatbots, to more complex projects that automate application screening, claims assessment and other functions.

All these tools have one thing in common: they rely on data for their creation and continued operation. That data often includes personal data, i.e. information that could identify individuals, bringing it under the purview of privacy and data protection laws such as the EU General Data Protection Regulation (GDPR) and its UK equivalent.

If your organisation is planning to implement any such tools, here are the most important factors to evaluate to ensure it remains compliant with the law:

Lawful Basis and Documentation

The UK GDPR expressly restricts the use of solely automated systems to make decisions that have legal or similarly significant effects on individuals. There are exceptions: when the processing is necessary for the performance of a contract with the person, when they have given their explicit consent, and when a law specifically authorises it.

The first thing an organisation must get right is its lawful basis for processing any personal data that will be used by an automated system, lest it fall foul of the GDPR (or other applicable data protection laws).

There must be clear documentation reflecting the assessment that an exception under Article 22 applies (and, where special category data is to be processed, that a condition under Article 9 is met). The system must also be reviewed through a data protection impact assessment covering the software and any third-party providers, to ensure data subjects’ rights are properly protected.

To comply with the accountability principle, all of these assessments must be reflected in the organisation’s Record of Processing Activities. Data subjects must also be informed, via your privacy notice, that automated decision-making takes place, as well as of their right to request further information or to object to the processing of their data.

Accuracy and Security 

Organisations should also bear in mind the accuracy of the AI technology they use and of the decisions it makes. The risks of automated decision-making (ADM) include wrong and/or biased decisions, and authorities have already clamped down on such cases. A court ordered Uber to reinstate five British drivers whom its algorithm had automatically removed from the platform. Across the pond, a New York City law restricting the use of AI tools in the hiring process is slated to go into effect in 2023.

The issue (and the solution) starts with the collection of data. Organisations must collect only the minimum data necessary, and they must have clear retention and disposal policies and procedures in place. The risk increases significantly with systems that rely on third-party software-as-a-service providers: it was recently reported that Microsoft staff had been reading messages users sent to its Bing AI chatbot. Although this was for quality-control purposes, according to Microsoft, it still means that the confidentiality of personal data, potentially including special category data, could have been breached.
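
As a rough sketch of what minimisation can look like before data ever reaches a third-party provider, the snippet below strips obvious direct identifiers from a message. The patterns and placeholder text are illustrative assumptions only; real-world redaction needs a vetted PII-detection step, not a couple of regular expressions:

```python
import re

# Illustrative patterns only; production redaction would use a
# vetted PII-detection library rather than hand-rolled regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def minimise(message: str) -> str:
    """Remove obvious direct identifiers before a message is passed
    to an external AI service, so the provider only ever receives
    what it actually needs to do its job."""
    message = EMAIL.sub("[email removed]", message)
    message = PHONE.sub("[phone removed]", message)
    return message

print(minimise("Hi, I'm on +44 7700 900123, email jane@example.com"))
# -> Hi, I'm on [phone removed], email [email removed]
```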

Article 32 of the GDPR requires organisations to implement appropriate technical and organisational measures to protect personal data. Options include pseudonymisation and encryption, as well as regular penetration testing. Strong data processing agreements with processors and joint controllers, alongside regular reviews and audits of their operations for compliance, are also essential.
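
To make the pseudonymisation option concrete, here is a minimal sketch of one common approach: replacing a direct identifier with a keyed token before a record leaves your systems. The key handling, field names and token truncation below are illustrative assumptions, not a production design:

```python
import hmac
import hashlib

# Secret key held by the controller, kept separate from the data.
# In practice this would live in a key management system, not in code.
PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token.

    Using HMAC rather than a plain hash means the mapping cannot be
    rebuilt by anyone who does not hold the key, while the same input
    still maps to the same token, so records remain linkable."""
    digest = hmac.new(PSEUDONYMISATION_KEY,
                      identifier.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()[:16]

# Strip the direct identifier from a record before it is handed to a
# third-party AI service for processing.
record = {"email": "jane@example.com", "claim_text": "My claim is..."}
safe_record = {
    "subject_token": pseudonymise(record["email"]),
    "claim_text": record["claim_text"],
}
print(safe_record)
```

Because the token is deterministic, the organisation can still link decisions back to the right individual internally, while the AI provider never sees the underlying identifier.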

Conclusion

Artificial intelligence and robotic process automation of any sort are only as ‘intelligent’ as the instructions coded into them. The ultimate power over how they process the data fed to them lies with your organisation, as does the ultimate responsibility for any mishaps that might occur.

To remain compliant, minimise risk and protect your organisation, staff and service users, follow the principles outlined in Article 5 of the GDPR, and incorporate data protection by design and by default at every stage of any project that introduces AI tools into your operations.

Unsure of where to start? Consider a data protection audit, or outsource your data protection to one of our Data Protection Officers.

If you’d like to learn more about everyday data protection, read our latest article: 5 Everyday Tips For Protecting Your Data.
