DPAS Data Protection Bulletin – March 2026


Welcome back to our monthly DPAS bulletin, where we cover the latest data protection news and developments from around the world.

Who is really watching through our Meta glasses? Can we trust “Privacy by Design”? What is the price of misinformation? Can the UK government fix a cyber vulnerability in a week? Will the global regulatory stance on deepfakes force technology companies to listen? 

Read about all this and more in our latest DPAS Data Protection Bulletin.

Meta’s AI smart glasses send your data to Kenya

If anyone genuinely believed the tagline “designed for privacy, controlled by you” from Meta of all companies, then this one may surprise you. Sama, a Nairobi-based subcontractor, employs thousands of people to label objects captured by the glasses’ video feature. This helps train the glasses’ AI to recognise objects, such as the sensitive documents you’re reviewing or the children you’re caring for.

Meta has claimed that video recordings would not be shared for training purposes unless expressly permitted by the user. However, anonymous whistleblowers have reported reviewing a variety of videos that no one would knowingly have permitted them to see, ranging from sensitive legal documents to moments of physical intimacy. Some may question why people would wear the glasses in such situations, but fear not: Meta’s offering records even when you are not wearing them, so you can be assured they capture every detail.

Read more about this here.

Meta to pay damages after misleading users about platforms’ impact on mental health

Staying on the Meta train for this month, a New Mexico jury found that Meta enabled child exploitation on its platforms and misled people about the effect those platforms had on children’s mental health. A figure of $375 million for violations of the New Mexico Unfair Practices Act is the latest arrow in the quiver of consumer action against Meta in the States.

Meta, unsurprisingly, has appealed the judgement. New Mexico isn’t done with Meta though, and has gone in for a second bite of the cherry with a separate public nuisance claim set for trial in May. This is just one of many cases brought against Meta across the States. Hopefully, the plaintiffs have better luck collecting money from Meta than their European counterparts.

Read more about this here.

Second AI safety report published

The International AI Safety Report 2026 has been released and provides a sobering global assessment of the systemic risks inherent in AI development. The report emphasises the dangers posed by “human-in-the-loop” training processes and the persistent lack of transparency in data supply chains, noting that “privacy by design” claims often mask aggressive data harvesting practices. 

Unsurprisingly, AI-driven platforms frequently prioritise engagement algorithms over the mental health and physical safety of vulnerable populations. If the first two articles didn’t erode your faith in technology giants, this serves as a wonderful pudding.

Read more about this here.

Cyber this, Cyber that

The UK government has launched a new Vulnerability Monitoring Service that has successfully reduced the median time to fix critical cyber weaknesses by 84%, cutting the window for potential exploits from 50 days down to just eight. This initiative, part of a wider effort to bolster the resilience of essential public services like the NHS, focuses on securing the Domain Name System to prevent attackers from redirecting users to fraudulent sites or intercepting sensitive communications. Considering how many cyber incidents arose from social engineering and phishing attempts last year, this is a welcome improvement.

To sustain these technical improvements, the government also introduced a dedicated Cyber Profession and a new Cyber Academy. These programs are designed to recruit and train top-tier experts through structured career pathways and apprenticeships, ensuring the public sector has the skilled workforce necessary to stay ahead of increasingly sophisticated digital threats.

Read more about this here.

Digital Omnibus gets a two-for-one on opinions

The EDPB and EDPS have released another Joint Opinion. This Joint Opinion, 2/2026, provides a critical assessment of the European Commission’s Digital Omnibus Package. For those not keeping up with the Digital Omnibus, this is a proposal designed to streamline the EU data protection rulebook and improve organisational competitiveness. Seems a familiar promise!

While they welcome the objective of reducing administrative burdens, the EDPB and EDPS warn that the current proposal risks introducing new legal uncertainty and could potentially reduce the level of protection afforded to individuals. There is a concern that some of the proposed changes might actually make EU data protection laws harder to apply in practice. 

Read more about this here.

Regulators issue joint call for AI companies to stop misusing personal data

The ICO has released a joint statement on AI-generated imagery as part of a global coalition of privacy authorities. The statement addresses the urgent risks of non-consensual and harmful synthetic content. They specifically target the creation of realistic images and videos depicting identifiable individuals without their knowledge or consent, which can lead to non-consensual intimate imagery and defamatory depictions.

The signatories call on organisations to implement robust technical safeguards to prevent the misuse of personal information and to provide transparent information about AI system capabilities. As previously reported, this has been one of the major issues with the EU’s AI Act thus far. Rightfully, the authorities call for rapid removal mechanisms for individuals affected by such imagery. Where those mechanisms will come from remains to be seen, but hopefully this indicates an appetite for enforcement as these situations develop.

Read more about this here.

Crime and Policing Bill to force takedowns of abusive images

Four days before the ICO issued the above, the UK government announced a significant amendment to the Crime and Policing Bill that requires tech platforms to remove non-consensual intimate images within 48 hours of being flagged. This new legal requirement is backed by substantial penalties, with non-compliant firms facing fines of up to 10% of their global revenue or the potential blocking of their services in the UK.

The initiative aims to ensure that victims only need to report an image once for it to be removed across multiple platforms and automatically blocked from future uploads. Furthermore, the government is reclassifying the creation or sharing of such images as a “priority offence” under the Online Safety Act, mandating that platforms treat this abuse with the same level of severity as child sexual exploitation or terrorism. While the purpose is noble, given how other elements of the Online Safety Act have been implemented, it remains to be seen how effective this measure will be in practice.

Read more about this here.

Grok spooks Ofcom into action

In line with the above, Ofcom has announced it will fast-track a decision on new protections designed to safeguard women and girls. Originally scheduled for Autumn, the regulator moved the announcement up to May 2026, citing an urgent need for more robust enforcement following a formal investigation into X’s AI chatbot Grok. 

A central component of these proposals is a requirement for tech platforms to block non-consensual intimate images at the source. This would be achieved through advanced proactive “hash matching” technology, which identifies the unique digital fingerprints of known abusive images to detect and prevent them from being shared across the internet. This move aligns with the government’s recent reclassification of intimate image abuse as a priority offence, ensuring platforms treat these violations with the same severity as terrorism or child exploitation material.
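The idea behind hash matching can be sketched in a few lines. This is a simplified, hypothetical illustration (the `HashMatcher` class and its method names are our own, not from any real platform): it uses an exact cryptographic hash as the “digital fingerprint”, whereas real deployments use perceptual hashes (such as Meta’s open-source PDQ) that still match after an image is resized or re-encoded.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # Exact cryptographic hash of the image bytes. Production systems
    # use perceptual hashing instead, so near-duplicates also match.
    return hashlib.sha256(image_bytes).hexdigest()


class HashMatcher:
    """Toy blocklist of fingerprints of known abusive images."""

    def __init__(self) -> None:
        self._blocklist: set[str] = set()

    def register(self, image_bytes: bytes) -> None:
        # Called once, when a reported image is verified as abusive.
        self._blocklist.add(fingerprint(image_bytes))

    def is_blocked(self, image_bytes: bytes) -> bool:
        # Checked at upload time, before the image is ever published.
        return fingerprint(image_bytes) in self._blocklist


matcher = HashMatcher()
matcher.register(b"reported-abusive-image-bytes")
print(matcher.is_blocked(b"reported-abusive-image-bytes"))  # True
print(matcher.is_blocked(b"unrelated-upload-bytes"))        # False
```

The point of the design is that only fingerprints, not the images themselves, need to be shared between platforms, which is what lets a single report block future uploads everywhere.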

Read more about this here.

GET IN TOUCH WITH US!

If you need any support in ensuring your organisation is complying with the relevant legislation, or require training in the areas of data protection and information security, get in contact with us.

Either call us on 0203 3013384, email us at info@dataprivacyadvisory.com, or fill out our contact form. Our dedicated team will get back to you as soon as possible.
