QUARTERLY CLIENT NEWSLETTER - MARCH 2025

As a token of our appreciation for your continued partnership with us, welcome to the first edition of our new quarterly DPO customer newsletter.

In it, you’ll find news, advice, and articles about all things data protection: AI, cybersecurity, subject access requests, you name it. This content has been contributed by members of our team across different departments, so that you get the full range of perspectives here at DPAS. We hope you find it useful, and please let us know if there’s anything you’d like us to include in our next edition.

But for now, enjoy the read!

There’s a data breach! What do you do?

Picture this: It’s Friday afternoon, and you’re just about to shut down your PC and dive into your weekend plans when ping! – an email lands in your inbox with the dreaded subject line: “Data Breach”…

Leave it until Monday? Not an option! You know that the clock is ticking, and if the breach meets the reporting threshold, it must be reported without undue delay and within 72 hours of you becoming aware of it.

So, how do you manage the situation efficiently while still making it out the door in time for the weekend?

Here are some quick tips to help you stay on top of things:

  • Act fast – As soon as a breach is detected, contain it to prevent further exposure.
  • Assess the impact – Identify what data was affected, who is impacted, and the potential risks.
  • Record everything – Log the breach details in your internal incident register, even if it’s minor (see the sketch after this list).
  • Notify if required – If the breach poses a risk to individuals, report it to the ICO within 72 hours and inform affected individuals if necessary.
  • Mitigate the risk – Take steps to prevent further damage, such as resetting passwords or revoking access.
  • Learn and improve – Review what went wrong and update policies, training, and security measures to prevent future breaches.
  • Stay calm – Don’t panic; just follow the process.
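
To make the ‘record everything’ step concrete, here is a minimal sketch of an incident-register entry in Python. The field names and the helper are illustrative assumptions rather than a prescribed format; the 72-hour figure is the UK GDPR reporting window described above.

```python
# A minimal, illustrative incident-register entry. Field names are
# assumptions for this sketch, not a prescribed format.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # UK GDPR breach-reporting window

@dataclass
class BreachRecord:
    summary: str               # what happened, in one line
    data_affected: str         # categories of personal data involved
    individuals_affected: int  # best current estimate
    detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    contained: bool = False    # has the exposure been stopped?
    reportable: bool = False   # does it meet the ICO reporting threshold?

    def reporting_deadline(self) -> datetime:
        """72 hours from the moment you became aware of the breach."""
        return self.detected_at + REPORTING_WINDOW

# Usage: log the breach immediately, even if it looks minor.
record = BreachRecord(
    summary="Email with client spreadsheet sent to the wrong recipient",
    data_affected="Names and contact details",
    individuals_affected=40,
)
print(f"Report to the ICO (if required) by: {record.reporting_deadline():%d %b %Y %H:%M} UTC")
```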

A look back on past attitudes toward SARs

In 2000, I received my first Subject Access Request working for a large organisation. It said the expected: “I would like a copy of everything your organisation holds about me, please.”

The organisation I worked for had 52,000 employees and four manufacturing sites in France, Spain, Germany, and the UK. It was created in 1960 and had morphed through differently named businesses over that time, so there was a lot of information held. Email was held in vast quantities by almost every user, so a request for “everything” seemed a little daunting.

My DPO position was a new role, and I was creating policies and procedures for everything around data protection – there was nothing in place. So I wrote back to the person and asked: “Can you give me a particular area to search, or named managers to approach?”

The requester duly did, and off I went in search of material which might fit the scope of the request.

Those I approached either told me they didn’t have time to look at their information or they said they would and I had to chase them, and chase them, and chase them…

I’m glad to say that working for different organisations since that time has shown that subject access requests are taken far more seriously now and handled with the gravitas they should be.

Social care records are a massive area where SARs are received all the time, and there is so much legacy data that some of the results of a search can feel quite awkward or cringeworthy nowadays. Whilst we can’t change what was written down years ago, we can be mindful of the distress that reading such documents may cause and, in some instances, alert the requester in advance.

There was a programme made by the BBC called “Alma’s Not Normal” – a comedy about a character who had been in social care. Unfortunately, the title of the programme was what had been written on her records, which she obtained through a subject access request. When I watched the programme, it really brought home to me that, where personal data is concerned, we ALL should be aware of what we are writing down, recording, and repeating.

Making a SAR of any organisation is the right of every individual – and, as data protection professionals, we know that what is disclosed has to be factually about the person asking. Sadly, not everybody thinks about receiving a request when they are writing emails or compiling reports.

Pay or consent / pay or okay

By this point, every internet user will be overly familiar with the ‘cookies’ consent banner. Driven by the GDPR and PECR, in combination with an increased awareness of personal privacy, the hurdles of consenting to, rejecting, opting out of, and declining cookies and permissions can be such an obstacle that they drive users away from certain websites.

Pay or consent (or ‘pay or okay’) is a relatively new approach to obtaining cookie consent that could replace the standard banner, and one that is gaining ground, particularly in the UK. The user either ‘pays’ to enter the website – buying a subscription or membership, with the collection of personal advertising cookies limited as part of the offering (purportedly limiting data collection and improving personal privacy) – or ‘okays’ the collection of their personal information and the placing of various cookies on their browser. Of course, the third, unspoken option is to leave and not use the website’s services at all.

What does the ICO say?

‘In principle, data protection law does not prohibit business models that involve consent or pay. However, any organisation considering such a model must be careful to ensure that consent to processing of personal information for personalised advertising has been freely given and is fully informed, as well as capable of being withdrawn without detriment.’

The ICO has produced guidance to assist organisations with the deployment of these models, in which it focuses on four key points: the power imbalance between the service provider and the user, whether the fee is appropriate to the use, whether the service offered under the two options is fundamentally equivalent, and, of course, whether the service adheres to the principles of ‘privacy by design’.

Is this freely given consent?

The GDPR mandates that consent must be freely given, specific, informed, and unambiguous. However, the ‘pay or consent’ model raises questions about the extent to which consent can be considered freely given when a financial penalty is attached to refusal. While users technically have the option to not use the service, this argument weakens when applied to essential services like utilities or healthcare. In these cases, a ‘pay or consent’ model could exert undue influence on users – particularly those with limited financial means. It is a blurring of the line between freely given consent and coercion, a problem which is sure to raise alarms at regulatory offices about the validity of consent.

Cookie banners, as they currently stand, often utilise excessively complex and technical language, language that the average user may struggle to comprehend. The practice of bundling consent for a multitude of processing activities under a single ‘okay’ option further exacerbates this issue, obscuring the specific implications of data processing and arguably clouding the elements and information required to gain specific and unambiguous consent.
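
To illustrate the bundling problem, here is a minimal sketch, assuming a simple in-house consent model (the purpose names and record structure are invented for the example): consent is recorded per purpose, unambiguously and with a timestamp, rather than under a single bundled ‘okay’.

```python
# A minimal sketch of granular, per-purpose consent, as opposed to a
# single bundled "okay". Purpose names and record structure are
# illustrative assumptions, not a reference implementation.
from datetime import datetime, timezone

# Each non-essential purpose is consented to (or refused) separately;
# strictly necessary cookies need no consent under PECR.
PURPOSES = ["analytics", "personalised_advertising", "social_media_embeds"]

def record_consent(choices: dict[str, bool]) -> dict:
    """Store an unambiguous, per-purpose consent record so that consent
    is specific and can later be withdrawn purpose by purpose."""
    unknown = set(choices) - set(PURPOSES)
    if unknown:
        raise ValueError(f"Unknown purposes: {unknown}")
    # Anything the user did not actively tick defaults to refused.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "choices": {p: choices.get(p, False) for p in PURPOSES},
    }

# Usage: the user accepts analytics but refuses advertising cookies.
consent = record_consent({"analytics": True, "personalised_advertising": False})
print(consent["choices"])
```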

Ethics

The emergence of ‘consent or pay’ models introduces additional complexities and potentially amplifies existing concerns. Introducing a financial cost for exercising the right to refuse consent may create an element of coercion. There is some concern that this could compromise the freely given nature of consent and undermine its validity under the GDPR. The power imbalance between users and certain online service providers, coupled with the potential for manipulative design practices, also raises concerns about the fairness and transparency of these models.

The ICO guidance acknowledges that ‘consent or pay’ models can be implemented with an ‘appropriate fee,’ but emphasises a case-by-case assessment considering the specific circumstances of the service and its users. For instance, a fee that might be suitable for a newspaper website could be excessive for an online game due to differences in the power dynamic between the service provider and the user, as well as the perceived value of the service.

The normalisation of the commodification of personal data

While organisations may choose to introduce a ‘pay or consent’ model as part of a ‘privacy by design’ approach to product development, the model has the potential to lay the groundwork for a discriminatory two-tiered system in which privacy becomes a privilege for the fee payer rather than a fundamental right. It is the beginning of an explicit monetary valuation of personal data, particularly if a controller varies the fee based on the ‘quality’ of the data it might collect through personal advertising cookies. Of course, some individuals may wish to benefit from the sale of their own personal data, but valuing it intrinsically also puts a price on data belonging to the (arguably majority of) individuals who may not wish to sell.

The European Data Protection Board (EDPB) has expressed concerns about the GDPR compliance of these models, and it is possible that UK regulators may follow suit in the future. Organisations should, therefore, exercise caution and consider the potential implications of relying on ‘pay or consent’ models for obtaining user consent.

‘Pay or consent’ models appear to be accepted in the UK at the moment, but their legality and ethics remain an evolving area. The potential for these models to exploit vulnerable users, exacerbate existing inequalities, and undermine fundamental rights necessitates ongoing scrutiny and a cautious approach. As the ICO currently accepts the use of ‘pay or consent’, an organisation that wishes to use the model will need to consider user rights and, as always, develop a nuanced understanding of the ethical implications of this new way of leveraging personal data.

Human eyes vs AI

There are many tasks critical to the overall goal of safeguarding personal data, redaction being one of them. It has always been quite a time-consuming and sometimes tedious activity – the exact sort of thing that modern technology is evolving to help us tackle. So, with the rise of AI-powered systems, the question presents itself: should we rely on automated solutions or stick to traditional manual human redaction? Here are the pros and cons of both approaches, to help you make informed decisions for your organisation.

The Human Touch

The traditional method of manual redaction involves an actual reviewer, as human as you and I, meticulously examining documents and obscuring sensitive information. While seemingly archaic in this new “age of automation”, it remains a valuable practice.

Pros:

1. Humans better understand context, nuance, and subtle relationships between data points. This is crucial for identifying information that might be sensitive in a specific situation, a feat AI often struggles with.
2. Humans can make nuanced judgments, avoiding the over-redaction that AI systems sometimes produce when faced with ambiguous data.
3. For documents with intricate layouts, handwritten notes, or unusual data formats, human review remains indispensable. AI systems may falter where human adaptability shines.
4. Humans can adapt to unexpected variations in data and document formats, troubleshoot, and correct, a vital talent that machines can’t quite match.

Cons:

1. Manual redaction is a labour-intensive process, especially for large volumes of data.
2. Human error is inevitable, especially during repetitive tasks. Fatigue and distraction can lead to missed redactions, creating compliance risks.
3. Different reviewers may apply redaction rules inconsistently, leading to compliance issues, potential legal challenges, and data breaches.

AI Involvement

AI redaction systems use machine learning algorithms to identify and redact sensitive information automatically. This offers compelling advantages, but also brings challenges.

Pros:

1. AI systems can process vast amounts of data quickly, significantly reducing redaction time and freeing up human resources.
2. Automated systems minimise the risk of human error associated with manual redaction.
3. AI can easily handle large volumes of data, making it ideal for organisations with high redaction demands.

Cons:

1. AI systems may struggle to understand context and nuance, leading to missed or excessive redactions.
2. AI algorithms can misinterpret data, leading to false positives (redacting non-sensitive information) or false negatives (failing to redact sensitive information).
3. AI systems rely on training data, which may not always be representative of all data types, potentially leading to bias and inaccuracies.
4. AI systems may struggle with unexpected variations in data and document formats, requiring constant updates and refinements.
5. Implementing and maintaining AI redaction systems can be expensive, requiring significant investment in technology and expertise.
6. Although AI systems can consistently recognise patterns, they can miss inconsistencies within the data that a human reviewer would query.

Risks Involved with AI Redaction

While AI redaction can be an efficient alternative, you must also acknowledge the risks involved when processing personal and sensitive data:

1. AI models, even with anonymisation efforts, can inadvertently reveal sensitive information. Data breaches expose personal data, leading to identity theft and reputational damage. AI-driven profiling and surveillance raise ethical concerns.
2. AI models trained on biased data perpetuate those biases, leading to unfair outcomes in loan applications, employment, and criminal justice. Sensitive attributes like race, gender, or religion can be misused, resulting in discrimination.
3. AI systems are vulnerable to cyberattacks, compromising personal data. Adversarial attacks can manipulate AI outputs, leading to data breaches. Insider threats pose a risk if individuals with data access misuse it.
4. The complexity of AI models makes it challenging to understand their decision making process. This lack of transparency hinders accountability and makes it difficult to audit AI systems for ethical and responsible use.
5. Non-compliance with data protection regulations like the GDPR can result in hefty fines. Individuals can take legal action against organisations misusing their data. The use of AI with personal data raises legal questions around discrimination, privacy, and other rights.
6. Data breaches and privacy violations severely damage an organisation’s reputation and erode public trust.

Finding the Right Balance

The optimal approach often involves a hybrid model, combining the strengths of both manual and AI redaction – see the sketch after the numbered list below.

1. Use AI to perform initial processing, identifying the correct data sets, and suggesting basic redactions, streamlining the process.
2. Employ human reviewers to verify AI’s initial redactions, ensuring accuracy and contextual understanding, mitigating the risks of false positives and negatives.
3. Use manual redaction for complex documents or situations requiring a high degree of contextual awareness, leveraging human adaptability.
4. Regularly review and refine both manual and AI redaction processes to improve accuracy and efficiency.
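
As a concrete illustration of steps 1 and 2, here is a minimal sketch in Python. A couple of regular expressions stand in for the AI detector (a real system would use a trained model), and anything below a confidence threshold is queued for a human reviewer rather than redacted automatically. All patterns and scores are invented for the example.

```python
# A minimal sketch of the hybrid redaction model. Regexes stand in for
# the AI detector; a real system would use a trained model.
import re

# Stand-in "AI" patterns with rough confidence scores (assumptions).
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "email", 0.95),
    (re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"), "phone", 0.80),
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "possible name", 0.50),
]
CONFIDENCE_THRESHOLD = 0.75  # below this, a human decides

def hybrid_redact(text: str) -> tuple[str, list[str]]:
    """Auto-redact high-confidence matches; queue the rest for review."""
    review_queue = []
    for pattern, label, confidence in PATTERNS:
        if confidence >= CONFIDENCE_THRESHOLD:
            text = pattern.sub(f"[REDACTED {label}]", text)
        else:
            # Low confidence: flag for a human reviewer instead.
            review_queue += [f"{label}: {m.group()}" for m in pattern.finditer(text)]
    return text, review_queue

# Usage
redacted, queue = hybrid_redact(
    "Please contact Jane Smith at jane.smith@example.org or 01234 567890."
)
print(redacted)  # email and phone auto-redacted
print(queue)     # "possible name: Jane Smith" left for human judgement
```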

What do we suggest?

  • Assess your organisation’s redaction needs and data volumes to determine the most suitable approach.
  • Evaluate the accuracy and reliability of AI redaction systems before implementation.
  • Develop clear redaction policies and procedures to guide both manual and AI-driven processes.
  • Provide adequate training to staff on both manual and AI redaction methods, ensuring competency and compliance.
  • Implement robust quality control measures to ensure accuracy and minimise risks.

By carefully considering the strengths and weaknesses of both manual and AI redaction, and understanding the specific risks associated with AI and personal data, you can develop a robust and efficient redaction strategy that meets your organisation’s needs.

The roles women play in our sector

With International Women’s Day falling in March, I thought this would be a pertinent time to mention the important role that women play within our sector.

International Women’s Day (IWD) is celebrated annually on 8th March to recognise the social, economic, cultural, and political achievements of women across the world. It is also a day to raise awareness about gender equality and advocate for women’s rights.

Each year, IWD has a theme that highlights specific issues affecting women. The United Nations chose the 2025 IWD theme as “For ALL Women and Girls: Rights, Equality, Empowerment.” This theme emphasises the importance of ensuring that every woman and girl, regardless of background or circumstance, has equal rights, opportunities, and the ability to empower themselves – closely aligned with DPAS’s motto of ‘Engage, Educate, Empower’.

It is a cause that overlaps strongly with what we are trying to achieve within the data protection industry: protecting the rights and freedoms of our data subjects. The inequality that is often observed in a more physical sense is evident within data too.

Women’s data is often underrepresented in datasets, leading to biased algorithms in areas like recruitment, access to healthcare, and policing, amongst others. IWD promotes discussions on ethical AI use and fair data processing practices to ensure gender equity, alongside advocating for women’s digital rights to protect against risks whose victims are predominantly female, such as cyber-stalking, revenge porn, and online harassment.

Historically, women have been underrepresented in data protection and cybersecurity roles (although there has been a promising shift in more recent years). IWD encourages more women to enter careers in tech and law, advocating for women to contribute to innovative technology and legislative updates.

  • Women constitute approximately 24% of the global cybersecurity workforce (ISC2).
  • In the UK, women make up 17% of the cybersecurity workforce, a decrease from 22% in the previous year (Gov).
  • The IAPP found a 50/50 gender split in data protection roles; however, only 20% of leadership positions were held by women.
  • 76% of women in tech have experienced gender bias or discrimination in the workplace (Women in Tech).

Diverse representation in data protection roles is crucial for developing inclusive policies and addressing the unique concerns of different demographics. Women play a pivotal role in shaping standards, ensuring that frameworks are comprehensive and cater to all individuals (Women Tech Network).

So, whilst there is still some work to do, there are also some notable recent events that demonstrate that we might just be moving in the right direction:

CyberFirst Girls Competition

The National Cyber Security Centre (NCSC) recently hosted the CyberFirst Girls Competition at the Jodrell Bank Observatory. This annual event aims to inspire and encourage young girls to pursue careers in cybersecurity by engaging them in a series of challenges designed to test their problem-solving and technical skills.

A record 4,159 teams from more than 800 schools signed up to the NCSC competition.

Rolls-Royce’s Commitment to Inclusion

Rolls-Royce has implemented a series of initiatives to promote diversity, inclusion, and belonging within its workforce, aiming to provide a psychologically safe environment for its employees. One such initiative, “Being Like Me,” allows employees to share personal stories on the company intranet, fostering openness and understanding among colleagues. The aerospace giant has been actively working to improve gender representation, achieving gender parity on its board and increasing the number of women in executive roles.

Cisco’s Leadership in Workplace Equality

Cisco stands out in the tech industry for its commitment to diversity, equality, and inclusion, earning the top spot on the FT-Statista list of European Diversity Leaders. Cisco ensures pay parity by annually analysing and adjusting salaries for fairness across gender, race, and ethnicity.

Real Progress Needs Diversity

These initiatives not only promote gender equity but also strengthen the tech and data protection industries by creating diverse perspectives. While progress is being made, true change requires a collective effort from everyone, regardless of gender, role, or seniority.

Each of us has a role to play in building a more inclusive and equitable sector (and world). This can be as simple as advocating for fair recruitment practices, ensuring that women’s voices are heard in meetings, mentoring and supporting women entering the field, or championing policies that promote equal opportunities.

Leaders and organisations must also actively work to create environments where women can thrive, ensuring that diversity isn’t just a statistic but a fundamental part of company culture.

So, although we have a dedicated ‘International Women’s Day’ let’s remember that equality isn’t about one day a year, and that inclusion isn’t just about representation, it’s about creating spaces where all individuals, regardless of gender, background, or experience, feel valued, empowered, and supported.

High Court Ruling on Targeted Marketing and Vulnerable Consumers

A recent High Court judgment (RTM v Bonne Terre Ltd [2025] EWHC 111 (KB)) has raised significant questions about the legality of targeted marketing and data processing practices, particularly concerning vulnerable individuals.

The case was brought by RTM, an anonymised claimant and recovering gambler, against Sky Betting and Gaming (SBG). RTM alleged that SBG unlawfully collected and processed his personal data to deliver highly targeted marketing, using profiling algorithms to track his gambling behaviour and serve him personalised advertisements. He argued that this marketing exacerbated his compulsive gambling disorder, causing substantial financial and emotional harm. Moreover, he contended that SBG lacked a lawful basis for processing his data, as he had not provided legally effective consent.

Between July 2017 and early 2019, when SBG eventually suspended RTM’s account, he was subjected to persistent and highly personalised marketing, including:

  • Frequent email campaigns, with 114 emails sent in August 2018 alone
  • Targeted social media advertisements on platforms like Facebook and X
  • Personalised offers tailored to his gambling behaviour

These strategies were specifically designed to be frequent and compelling, making them difficult to ignore. RTM’s claim raised fundamental concerns about the ethical boundaries of personalised marketing, particularly when directed at individuals with gambling addictions.

Court Findings and Legal Implications

Delivering her judgment, Mrs Justice Collins Rice conducted a detailed assessment of the case under the Data Protection Act 1998, the Data Protection Act 2018, UK GDPR, and the Privacy and Electronic Communications Regulations (PECR).

The court found that:

  • SBG’s use of cookies for personalised direct marketing was unlawful.
  • SBG’s direct marketing to RTM did not meet the legal requirements for lawful data processing.

A key aspect of the ruling centred on RTM’s consent. The court determined that his gambling addiction compromised his autonomy, meaning that his consent to data processing for marketing purposes was not legally effective. This decision reinforces the need for stricter safeguards in the use of personal data for targeted advertising, particularly when dealing with vulnerable individuals.

A Shift in the Consent Framework?

The judgment suggests that a subject’s vulnerabilities and disabilities must be considered when seeking consent for data processing. This marks a potential shift in the way consent is obtained, particularly for industries reliant on targeted advertising.

For individuals who may be vulnerable, such as those with gambling addictions, the process of securing legally valid consent becomes more complex. Businesses would likely need to implement more rigorous and individualised consent mechanisms to ensure that an individual’s ability to make an informed and autonomous decision is not compromised.

However, this could result in a practical and administrative burden on advertisers, potentially discouraging them from relying on consent as a lawful basis for data processing. As a result, companies engaging in targeted marketing, particularly within high-risk industries, may need to reassess their approach to obtaining consent, ensuring compliance with stricter legal and ethical standards.

That said, imposing a greater burden on advertisers to prevent them from exploiting vulnerable individuals, as happened in this case, seems a win for data protection law in this writer’s eyes. Indeed, it sets a positive precedent for safeguarding vulnerable consumers, encouraging businesses to adopt more ethical and transparent marketing practices.

The rise of Deepfake technology

If I had a pound for every time someone asked me ‘What do you actually do?’ – I’d probably be retired. But here I am, writing this instead. I used to brush the question off with a sweeping statement that minimised what my role actually meant – usually something about law or legal compliance – and move on to talking about someone’s ‘more interesting’ job. But I was early in my career and still getting a handle on the fundamentals – my focus at the time was learning the legislation, the environment, and the practicalities.

Fast forward a ‘few’ years, and now I find myself in a place where actually, I would never describe my career as being about legal compliance. At some point a switch flicked and I realised my role was about advocating for rights, and trying to educate others about the real-life implications of that legal compliance ‘stuff’.

So, I am going to make a point of writing something that answers the ‘What do you actually do?’ in a way that I hope resonates with more people than my old ‘legal compliance’ answer. In a way that hopefully makes you see why I think it is so important, and why it matters more than so many people realise. 

Artificial Intelligence

I know what you’re thinking – another AI blog. But stick with me. 

It is impossible to escape the AI conversation these days, even my nan was telling me a story not long ago about ‘AI Granny’ – created by O2 to effectively waste scammers’ time; she had seen a feature on it on the ITV show ‘This Morning’. My friends all have the ChatGPT app to hand, and every other client I speak to is asking about how they can implement AI technology in their organisation.

Now, I am all for innovation, I am all for advancements, I am all for utilising technology to streamline workflows, to access more knowledge than ever, and, frankly, to have a bit of fun. However, like all things, not everybody is singing from the same hymn sheet.

I would say 80% of the people I speak to about AI, in any capacity, have no idea about the darker side of the technology being created, or the risks that it presents – specifically, the risk to us as individuals. I spend a lot of time talking about risks within businesses, and often don’t have the opportunity to talk about the more day-to-day risks that some of this technology introduces.

How it can affect you

There is a plethora of technology available at the click of a button – websites to visit, apps to download. Whilst most of us can access them, they are not all designed to be utilised by ‘everyone’.

Those most at risk of this technology harming them? 

Women.

Let’s look at Deepfakes for example… 

What is a Deepfake?

A deepfake is a type of artificial intelligence generated media that manipulates or replaces someone’s likeness in videos, images, or audio recordings. While deepfakes can be used for entertainment, satire, or filmmaking, they have also been linked to misinformation, identity fraud, and other unethical uses. 

Effectively, they are (most commonly) videos that look ‘real’ but aren’t. You have probably seen deepfakes of Joe Biden and Donald Trump, even if you didn’t realise it.

The BBC also recently produced a drama called ‘The Capture’, centred on the use of deepfakes. It’s worth a watch: a programme that many would view as a James Bond-esque, futuristic look at what technology ‘might’ do is actually a very current look at what technology CAN do.

The TV show has a political theme, but one of the most prevalent unethical uses of this technology is more wide reaching: Deepfake porn.

Yep, you read that right.

What is Deepfake Porn?

Well, exactly what it says on the tin: pornographic content created using deepfake technology. So now that fake video could feature you engaging in sexually explicit acts – nice, right? Just another thing to add to the list of ‘crap things on the internet’.

In 2023 an analysis of deepfakes online revealed that 98% of deepfakes are pornographic, and 99% of the victims are women.

You might be thinking that it’s a depraved joke reserved for celebrities, influencers, and those in the public eye. Although the likes of Taylor Swift, Jenna Ortega, Alexandria Ocasio-Cortez, and Giorgia Meloni have all been victims of this hideous ‘trend’, thousands of ‘normal’ women have had the same unfortunate experience, many of whom are likely unaware that there are falsified images of them circulating the dark corners of the internet.

This isn’t technology that is exclusive to the elusive ‘dark web’ or secret circles; it is only a Google search away from absolutely anyone. You do not need any technical expertise or knowledge of artificial intelligence, image manipulation, or editing – you simply need to know how to use a web browser.

There are websites, and downloadable apps, that exist solely to ‘nudify’ women. You simply upload a photograph of someone, and the technology does the rest, producing a photograph – or a video, if you choose – of the individual, undressed. You may be thinking, ‘well, you could upload an image of a man’. You’re right, you could, but the result would be a man’s face superimposed on a woman’s body – what a display of equality… sigh.

After creation, these images and videos find their way to dedicated forums and groups where users share their ‘art’, swap tips on improving outputs, and post their own lewd content demonstrating how they enjoy the hideous imagery they have created without any consent from their victims.

There are a multitude of reports from young girls who have been victims of this sort of abuse; some are just teenagers. Many are unaware of the content until contacted by friends and acquaintances – just imagine getting that phone call, or opening that text. I can remember being around 18 when a local girl’s photographs were posted on Facebook following a break-up; there was more chatter of ‘have you seen this?’ than ‘is she okay?’ – and that is the problem.

Data from UK based ‘Revenge Porn Helpline’ shows image-based abuse using deepfakes has increased more than 400% since 2017.

I find it hard to believe that, in a world where technology is being used to improve cancer survival rates, wildlife conservation, food waste reduction, and humanitarian aid, there are huge numbers of individuals who would rather contribute their time and energy to creating life-damaging images of women.

The reality is that this is a data protection issue. An issue that my job role quite literally exists to prevent.

The Good News

The good news is that there is a genuine appetite to stop it. 

The UK’s Online Safety Act 2023 amended the Sexual Offences Act 2003 to criminalise the sharing of intimate images that appear to depict another person without their consent, encompassing deepfake content. However, the production of the material remained a legal grey area, prompting further legislative proposals to close this gap.

In January 2025, the UK government announced plans to criminalise the creation and distribution of sexually explicit deepfakes without consent. This move aims to directly address the growing misuse of AI technology to produce realistic but fake intimate content, which has disproportionately targeted women and girls.

Following advocacy efforts, the government reconsidered an amendment to the Data Use and Access Bill (DUAB) that would have required victims to prove the perpetrator’s harmful intent in deepfake cases. This shift underscores the importance of consent-based laws to protect victims effectively.

There are some tremendous women working tirelessly in this space. For anyone interested in further reading, I encourage you to look at Clare McGlynn’s work on image-based abuse, specifically deepfake porn – her website provides an array of information and links to key resources and articles that dive deeper into the epidemic than my role will ever allow.

Although progress is being made, and recognition from key individuals is growing, we cannot afford complacency. The reality is that technology is advancing faster than legislation.

Conversations like this – acknowledging the risks, challenging the misuse of AI, and advocating for stronger protections – are crucial. The fight against AI-fuelled image-based abuse isn’t just about law; it’s about changing attitudes, demanding accountability, and ensuring that innovation serves us all in a safe and ethical way.

So, for those of you who want to know why I chose to stick with a career in data protection after years (and thousands of pounds) of university-level education in a different field, or who want to know ‘what do you actually do?’ – this is why, and this is what. I get to be part of an industry that is contributing towards change in areas you would never imagine, areas that genuinely impact all of us on a level none of us could have predicted years ago.

The DSP Toolkit submission deadline: Are you getting ready?

For those that are required to make one, the deadline for the Data Security and Protection Toolkit (DSPT) submission is fast approaching, and now is the time to ensure that you are on track to make a successful submission. This year’s deadline is 30th June 2025, so it’s crucial to review your progress, finalise any outstanding actions, and confirm that your evidence is up to date. Whilst it is currently March and June may seem far away, we all know how little tasks can sometimes take a couple of months to finalise.

For DPOs, SIROs, and teams handling the DSPT process, this is a key moment to double-check compliance with the latest data security standards. These have been updated since last year, so it is crucial that you familiarise yourself with the new requirements. Whether you’re submitting for the first time or maintaining your status, taking a structured approach can help ensure a smooth and stress-free submission.

Here’s a quick checklist to help you prepare:

  • Review your DSPT status – Make sure all sections are completed and up to date.
  • Gather necessary evidence – Ensure you have supporting documents for policies, procedures, and training records.
  • Engage with key stakeholders – IT, HR, and senior management may need to provide input or sign-off.
  • Check for updates – Any changes in guidance or requirements since your last submission?
  • Allow time for final review – Don’t leave submission until the last minute!

Submitting your DSP Toolkit not only helps you meet regulatory obligations but also demonstrates your commitment to protecting sensitive information. If you need support or guidance, now is the perfect time to reach out and ensure everything is in place. 

You should also consider having your DSP Toolkit submission audited, as this is a great way to gain independent, unbiased expertise on the standard of your submission.

Thank you for your support

As we move into the next quarter, it’s clear that the data protection landscape is changing – particularly with the rapid development and deployment of AI across all sectors. From healthcare to education, finance to retail, AI is transforming the way organisations operate. But with innovation comes increased scrutiny, and we’re anticipating a potential shift in regulatory expectations as lawmakers and regulators respond to these technological shifts.

We’re also closely following the progress of the Data Use and Access Bill, which is now at the report stage after the second reading in the House of Commons. With the third reading expected soon, we want to reassure you that we’re monitoring every development carefully. The current feeling across the industry is that it is due to be finalised just after Easter. Any changes that could affect your organisation will be shared promptly, along with clear, practical guidance on what actions may be required.

As your expert advisor and/or DPO, our role is not just to react to changes, but to help you stay ahead of them. We’re committed to ensuring you feel confident in your organisation’s approach to privacy, governance, and compliance – even as the regulatory landscape continues to shift.

This quarter, we’ll be sharing new resources via the ticketing system, hosting free webinars, and providing tailored support to help you adapt. We are here to support you at all times with anything relating to data protection compliance. Also, don’t forget, as a client of DPAS you are entitled to discounted rates on any of our training courses, so if you need some bespoke training, or need to send a member of your team on a specific course, just let us know.

Thank you for your continued support, and we hope that you find our resources useful.

— Melanie Garnett, CEO