
AI Recruitment Compliance UK: What Every TA Leader Needs to Know Now

Written by Arctic Shores | Apr 8, 2026 6:48:54 AM

Introduction 

If you've introduced any form of AI or automation into your hiring process in the last 12 months — whether that's an AI screening agent, automated sifting, or even a new ATS feature — there's a good chance your compliance position has changed without you realising it.

The regulatory landscape around AI recruitment compliance in the UK is shifting quickly. The Data Use and Access Act 2025 came into force in February 2026, the ICO has named AI in recruitment as a priority enforcement area, and high-profile cases like the Workday and Eightfold class actions in the US are putting pressure on organisations everywhere to demonstrate that their hiring processes are lawful, transparent, and fair. For TA leaders already stretched thin by spiralling application volumes and pressure to automate, compliance can feel like yet another thing to worry about — but getting it wrong carries serious consequences.

On a recent episode of the TA Disruptors podcast, Arctic Shores co-founder Robert Newry sat down with Husna Grimes, a senior commercial lawyer and privacy leader with over 18 years of experience helping organisations use data responsibly. Husna has led privacy strategies at Channel 4 and ad tech firm Permutive, and now co-leads Pure Privacy Consulting. In this conversation, she breaks down the areas that matter most for TA teams right now — from consent and lawful basis to automated decision-making, DPIAs, and practical steps you can take next week. Here's what you need to know.

Why consent is problematic in recruitment

Most TA leaders assume that getting a candidate's consent to process their data is sufficient. But as Husna explains, consent is actually one of the weakest lawful bases you can rely on in a recruitment context — and the Eightfold case illustrates exactly why.

The core issue is the power imbalance. For consent to be legally valid under UK GDPR, it needs to be freely given. But candidates almost always feel compelled to agree to whatever is asked of them, because refusing might mean their application isn't considered. That pressure undermines the legitimacy of consent from the outset.

Then there's the transparency problem. In the Eightfold case, the allegation is that candidates didn't know their data was being collected and used to generate AI profiles that were then used to screen them out of roles. They had no opportunity to see or challenge those reports. As Husna puts it: "If candidates don't know that data is being collected about them or used in this way, they can't consent."

The practical takeaway? Rather than asking "how do I get consent?", TA teams should be asking whether there's a more appropriate lawful basis — and in most recruitment scenarios, that's legitimate interest.


Legitimate interest vs consent: which should TA teams use?

Under UK GDPR, legitimate interest is generally the more appropriate lawful basis for typical recruitment processing. You have a clear business need — you're evaluating someone's suitability for a role — and that's a legitimate purpose.

But legitimate interest isn't a free pass. To rely on it, you need to carry out what's called a legitimate interest assessment (LIA) — a structured, three-step process that examines the purpose of your data processing, whether it's necessary, and whether your business interests outweigh the candidate's privacy rights.

Husna recommends documenting this thoroughly. The factors you should consider include the reasonable expectations of the candidate, whether any invisible processing is taking place, whether special category data is involved, and what safeguards you have in place. This documentation isn't just good practice — it's your evidence if a candidate objects. Under legitimate interest, candidates have a right to object, and your LIA is what demonstrates you've thought it through carefully.
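
To make this concrete, here's a minimal sketch of how an LIA might be captured as a structured, reviewable record. The schema and field names are illustrative assumptions rather than a prescribed format; legally, it's the content of the assessment that matters, not the tooling.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LegitimateInterestAssessment:
    """Illustrative record of the three-step LIA test (hypothetical schema)."""
    purpose: str        # step 1: why are we processing candidate data?
    necessity: str      # step 2: is using the data this way necessary?
    balancing: dict = field(default_factory=dict)  # step 3: our interests vs candidate rights
    last_reviewed: date = date.today()             # revisit whenever the processing changes

lia = LegitimateInterestAssessment(
    purpose="Evaluate applicants' suitability for an advertised role",
    necessity="Scores derive only from role-relevant, task-based assessment data",
    balancing={
        "reasonable_expectations": "Use of data is explained at the point of application",
        "invisible_processing": "None: no scraping or third-party enrichment",
        "special_category_data": "Held for adjustments only, outside any scoring",
        "safeguards": "Human review of outcomes; documented right-to-object process",
    },
)
```

Keeping it structured makes the review trigger obvious: when a tool, vendor, or purpose changes, the record and its review date change with it.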

The key point for TA leaders is that this isn't a one-off exercise. If your processing changes — say you introduce a new AI tool or switch vendors — you need to revisit the assessment and confirm the balance still tips in your favour.


What is automated decision execution — and why does it matter?

One of the most useful distinctions Husna draws in the conversation is between automated decision-making and what she calls automated decision execution.

Automated decision-making, under UK GDPR Article 22, refers to situations where a system is solely making the decision — automatically rejecting candidates, for example, without any human involvement. This is heavily restricted. The lawful bases available are narrow (explicit consent or performance of a contract), and both are difficult to satisfy in a pre-hire context.

Automated decision execution is different. This is where a human defines the logic — deciding which skills matter, how factors should be weighted, what good and poor look like — and the system executes those human-defined rules consistently across a large candidate pool. As Husna explains, the technology isn't making the decision about whether a candidate progresses. It's executing human judgement at scale.

The critical requirement is meaningful human involvement. This doesn't mean a human has to review every individual data point. It means humans set the criteria, can view and override outcomes, and can adjust the logic. This is an important distinction for any TA team using automated sifting tools — it's not the automation itself that creates risk, but the absence of documented human oversight in the design and governance of that automation.
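
As a rough illustration of what executing human judgement at scale can look like, here's a hedged sketch in Python. The skills, weights, and threshold are hypothetical; the point is that humans define them, the system merely applies them consistently, and a recruiter can inspect and override any outcome.

```python
# Human-defined logic: recruiters choose the skills, weights, and threshold.
CRITERIA_WEIGHTS = {"sql": 0.40, "stakeholder_management": 0.35, "data_visualisation": 0.25}
PROGRESSION_THRESHOLD = 0.70  # humans decide what "good" looks like

def score_candidate(skill_ratings: dict) -> float:
    """Execute the human-defined weighting identically for every candidate."""
    return sum(w * skill_ratings.get(skill, 0.0) for skill, w in CRITERIA_WEIGHTS.items())

def sift(candidates: dict, overrides=None) -> dict:
    """Apply the rules at scale; a human can view and override any outcome."""
    overrides = overrides or {}
    results = {}
    for candidate_id, ratings in candidates.items():
        auto_outcome = score_candidate(ratings) >= PROGRESSION_THRESHOLD
        # Meaningful human involvement: a recruiter's override beats the rule.
        results[candidate_id] = overrides.get(candidate_id, auto_outcome)
    return results

pool = {
    "cand_001": {"sql": 0.9, "stakeholder_management": 0.6, "data_visualisation": 0.8},
    "cand_002": {"sql": 0.3, "stakeholder_management": 0.5},
}
print(sift(pool, overrides={"cand_002": True}))  # recruiter rescues a borderline reject
```

The compliance-relevant part isn't the arithmetic: it's that the weights, the threshold, and any overrides are owned, documented, and adjustable by humans.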


How the Data Use and Access Act changes the rules

The Data Use and Access Act 2025, which came into force in February 2026, is the most significant update to UK data protection law since UK GDPR. For TA teams, the headline change is that it introduces legitimate interest as a lawful basis for solely automated decision-making — something that wasn't available under the previous framework.

This is a meaningful shift. It makes the legal environment for automated decision-making in recruitment more workable in practice. But Husna is clear that it's not a blanket exemption. Two important conditions apply: you must have specific safeguards in place, and special category data must not be processed as part of the automated decision itself.

That second point is worth unpacking. It doesn't mean you can't collect special category data — like information about neurodiversity to apply reasonable adjustments. It means that data shouldn't flow into the automated decision-making workflow. Husna's advice is to keep it separate from the system that's determining which candidates progress, so it can't inadvertently influence outcomes. This has real implications for how you design your recruitment process and data architecture.
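
One way to picture that separation, as a sketch under assumed names rather than a reference design: special category data is routed to its own store at intake and never passed to the scoring logic, so it structurally cannot influence the automated outcome.

```python
# Two logically separate stores: the sift logic can only ever read decision_inputs.
decision_inputs = {}    # assessment scores and role-relevant data only
adjustments_store = {}  # special category data, e.g. adjustment requests

SPECIAL_CATEGORY_FIELDS = {"neurodiversity_disclosure", "health_information"}

def register_candidate(candidate_id, application):
    """Route special category fields away from the decision workflow at intake."""
    special = {k: application.pop(k) for k in list(application) if k in SPECIAL_CATEGORY_FIELDS}
    decision_inputs[candidate_id] = application
    if special:
        adjustments_store[candidate_id] = special  # read only by humans arranging adjustments

def automated_sift(candidate_id):
    # By construction, this function has no access to adjustments_store.
    return decision_inputs[candidate_id].get("assessment_score", 0) >= 70

register_candidate("cand_003", {"assessment_score": 82,
                                "neurodiversity_disclosure": "extra time requested"})
print(automated_sift("cand_003"))     # True: decided without the disclosure
print(adjustments_store["cand_003"])  # available separately for reasonable adjustments
```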


Why your DPIA needs to be a living document

If there's one message that comes through most strongly in this episode, it's that the data protection impact assessment is far more important than most TA teams treat it. Husna describes it as your "source of truth" — the key document in your entire risk assessment process.

A DPIA isn't a tick-box exercise you complete once and file away. It should map your entire hiring workflow, identify every point where AI or automation is involved, document the risks you've identified, and set out the mitigating strategies you've put in place. It demonstrates good governance to regulators, partners, and candidates. It acts as an audit trail. And it forces accountability across the organisation.

Crucially, the DPIA shouldn't sit solely with legal and compliance. Husna argues the project owner — typically the head of TA — should be accountable for it, with legal, privacy, IT, and hiring managers all feeding in. Anyone who touches the recruitment system needs to contribute, because you can't successfully identify risks unless everyone is discussing how they plan to use the tools.

It's also a document that needs version control and regular review. Every time a feature changes, a new tool is introduced, or a use case evolves, the DPIA should be revisited. Husna draws a parallel with information security governance — the same level of oversight and seriousness should apply to your data protection documentation.
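
If the information security parallel resonates, the same tooling can apply too: a DPIA can sit under version control like any other governed artefact. Here's a hypothetical sketch of what a change-triggered revision entry might record; the fields are assumptions, not a mandated format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DpiaRevision:
    """One entry in a DPIA's version history (illustrative fields only)."""
    version: str
    reviewed_on: date
    trigger: str            # what changed: new tool, new feature, new use case
    risks_added: tuple      # risks identified in this revision
    mitigations: tuple      # safeguards put in place against them
    signed_off_by: str      # the accountable project owner, e.g. the head of TA

dpia_history = [
    DpiaRevision("1.0", date(2026, 1, 12), "Initial mapping of the hiring workflow",
                 ("Automated sifting of high-volume applications",),
                 ("Human-defined criteria; recruiter override with audit log",),
                 "Head of Talent Acquisition"),
    DpiaRevision("1.1", date(2026, 3, 3), "Vendor switched on a new AI screening feature",
                 ("Possible invisible processing by the vendor's model",),
                 ("Bias-testing reports requested; feature gated until reviewed",),
                 "Head of Talent Acquisition"),
]
```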

For organisations that also operate in the EU, the DPIA can be used to record AI-related risks relevant to the EU AI Act as well, potentially reducing the need for multiple overlapping risk assessments.


Vendor accountability: you can't outsource compliance

One of the sharpest warnings in the conversation concerns vendor relationships. It's tempting to assume that if something goes wrong with a third-party tool, the vendor bears the responsibility. Under UK and EU GDPR, that's not how it works. As the recruiting organisation, you are the data controller, and you are ultimately responsible for how candidate data is processed — regardless of what the vendor's system is doing behind the scenes.

Husna highlights vendor due diligence as one of the most commonly overlooked risk areas. Before onboarding any recruitment technology, TA teams should be asking pointed questions: can the vendor explain how their system makes decisions? Are they using candidate data to train their own models? Where is the data stored — and does it leave the UK? Have they conducted bias testing or auditing, and can they provide documentation?
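
One way to operationalise those questions is a simple checklist that flags anything left unanswered. The questions below come from the conversation; the structure around them is an illustrative assumption:

```python
# The questions come from the conversation; the code around them is illustrative.
VENDOR_QUESTIONS = [
    "Can you explain how your system makes decisions?",
    "Do you use candidate data to train your own models?",
    "Where is candidate data stored, and does it leave the UK?",
    "Have you done bias testing or auditing, and can you share the reports?",
]

def unanswered(vendor_answers: dict) -> list:
    """Flag every question the vendor hasn't answered; each gap is a red flag."""
    return [q for q in VENDOR_QUESTIONS if not vendor_answers.get(q, "").strip()]

answers = {
    VENDOR_QUESTIONS[0]: "Rules-based scoring against recruiter-set criteria",
    VENDOR_QUESTIONS[2]: "UK data centre; no transfers outside the UK",
}
print(unanswered(answers))  # the questions still to resolve before signing anything
```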

If a vendor can't explain how their software works, that's a red flag. And if their system involves scraping candidate data from the web or social platforms without candidates' knowledge, you're firmly in the territory of invisible processing — which raises serious questions about both lawfulness and fairness.

The practical step here is to involve your legal and privacy team early — ideally before you even begin an RFP process. Come to them with a clear use case, a workflow diagram showing how data flows in and out of the tool, and as much vendor documentation as you can gather. That way, compliance becomes a collaborative process rather than a last-minute blocker.


Key takeaways

  • Rethink consent as your default lawful basis. In recruitment, the power imbalance between candidate and employer means consent is rarely freely given. Legitimate interest is usually more appropriate — but it requires a documented assessment that you revisit whenever your processing changes.
  • Understand the difference between automated decision-making and automated decision execution. If humans define the criteria and logic, and the system executes those rules with meaningful human oversight, you're in a different position than if the system were making decisions autonomously.
  • Treat your DPIA as a living, cross-functional document. It should map your full hiring workflow, be owned by the TA project lead (not just legal), and be updated every time a tool, feature, or use case changes. It's your best defence if challenged.
  • Don't assume your vendor handles compliance for you. You're the data controller. If you can't explain how a tool makes decisions or where data goes, you're exposed. Ask the hard questions before you sign.
  • Map your process now. Draw out your hiring workflow from sourcing to onboarding, identify where AI and automation sit, and flag which vendors are involved. This is the essential first step to understanding your current compliance position — and it's something you can do this week (a minimal sketch of that map follows below).
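
For teams who'd rather start in a script or a spreadsheet than on paper, here's a minimal sketch of that mapping exercise; the stage names and vendor labels are placeholders, not recommendations.

```python
# Hypothetical workflow map: each stage records whether AI or automation is
# involved and which vendor (if any) touches candidate data at that point.
hiring_workflow = [
    {"stage": "sourcing",         "automated": True,  "vendor": "Job board aggregator"},
    {"stage": "application",      "automated": False, "vendor": "ATS provider"},
    {"stage": "sifting",          "automated": True,  "vendor": "Screening tool"},
    {"stage": "assessment",       "automated": True,  "vendor": "Assessment provider"},
    {"stage": "interview",        "automated": False, "vendor": None},
    {"stage": "offer_onboarding", "automated": False, "vendor": "HRIS provider"},
]

# Every automated touchpoint is a conversation starter for legal and privacy.
for step in hiring_workflow:
    if step["automated"]:
        print(f"Review with legal/privacy: {step['stage']} (vendor: {step['vendor']})")
```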

Listen now 👇

This article covers the highlights, but there's much more depth in the full conversation — including how the Eightfold case translates to UK law, the nuances of international data transfers, and what happens when candidates exercise their right to object.

Listen to the full episode below.

If you're exploring how to design a compliant, AI-resilient hiring process that stands up to regulatory scrutiny, explore how Arctic Shores can help.


Transcript:

Robert Newry, Arctic Shores, Co-Founder and Chief Explorer

Husna Grimes, Pure Privacy Consulting, Co-Founder

Robert: Welcome to the TA Disruptors podcast. I'm Robert Newry, co-founder and chief explorer at Arctic Shores, the task-based psychometric assessment company that helps organisations uncover potential and see more in people. In the last 12 months, TA teams have been pushed to move faster, automate more, and experiment with AI throughout the hiring journey.

But the regulators and courts are looking much more closely at how organisations use data and protect individuals' rights. And high-profile cases like the Workday and Eightfold ones in the US show how important it is for TA leaders to understand how they comply with the current and new rules and regulations. To help us with that, I am delighted to welcome Husna Grimes.

A senior commercial lawyer and privacy leader with over 18 years of experience in enabling organisations to use data responsibly. I've worked with Husna over the last six months and I can personally vouch for her impressive expertise and commercial acumen. And while there are many people with such expertise, there are few that I've come across who are able to explain and apply it as well as Husna does.

She's led privacy strategies for organisations like Channel 4, where she was the first in-house data protection lawyer, and then ad tech company Permutive. She now co-leads Pure Privacy Consulting, helping companies with creative and compliant solutions that satisfy both regulatory standards and customer needs. And today, she's going to help us understand what TA leaders really need to know about AI, data protection compliance and the law.

Specifically, we're going to discuss consent and the lawful basis for collecting candidate data. And we will also explore the UK's new Data Use and Access Act. Welcome to the podcast, Husna.

Husna: Thanks, Robert. It's great to be here.

Robert: Let's start with that broad, but also complex, area of consent. The Eightfold case in the US has firmly put consent back in the headlines, and what that means as much in the UK as it might do in other jurisdictions. But from a privacy and legal perspective, how should TA leaders think about consent and the lawful basis for collecting candidate data?

Husna: Eightfold's an interesting one because it highlights transparency and invisible processing, which exposes why consent perhaps isn't the best lawful basis to use in recruitment.

Robert: Right, and divisible processing.

Husna: Invisible.

Robert: Invisible processing, right, of course. Yes, exactly. You can't tell what's going on.

Husna: So if candidates don't know that data is being collected about them or used in this way, they can't consent. And if they don't know that profiles are being created, scored, enriched, or used to reject them as part of a process, there's no consent there.

Robert: That's very interesting. Probably what a lot of people haven't thought about or considered in some way. Because I, I suppose like many people, we just assume that data is out there about us. And if it's out in the public domain, then we have supposedly given some consent to it, but maybe not.

Husna: That's true. Well, the issue in Eightfold is the allegation that applicants weren't aware that data was being collected about them and used to compile AI-generated reports and profiles that were then used to screen them out of roles that they had applied for. And they didn't have an opportunity to look at those reports. They didn't have an opportunity to challenge any of the information in those reports.

Robert: And that becomes an important aspect of all of this. If somebody's going to use data about me, I want to know: are they using it in the right way? Is it representative? Because that really is then part of the consent if it's going to be used as part of a decision-making process.

Husna: Exactly, it's inferring characteristics about you, as was the case with Eightfold AI. Although it's a US class action lawsuit, we can take some points across to the UK. And really it highlights two things for me. One, the importance of transparency and fairness in recruitment, which we've talked about quite a lot before. And two, that consent really is a problem in the recruitment process because you can rarely have genuine and informed consent.

Robert: Oh, okay. Well, let me just understand that bit because as you said, there's two things around that, but I want to just… double down on the, that bit of consent on that and just understand that bit a bit more.

Husna: So if you think about a recruitment relationship, there's an inherent imbalance of power between the candidate and the recruiter because candidates often feel that they have no other choice but to consent to the processing.

Robert: Okay, and from a legal perspective, that can be an imbalance and that can change whether it's legitimate consent then?

Husna: Exactly, because for consent to be legitimate it needs to be freely given. So arguably in that scenario consent can't ever be freely given, because you feel a pressure to give that consent to make sure that your application will be considered.

Robert: Right.

Husna: So the question perhaps isn't how do I get consent? Instead teams should be asking is there another lawful basis that I might use that's more appropriate and go down that path and also how can we be transparent and fair in this process?

Robert: Yes, and okay, and because one must feed into the other. So if we're going to have got consent in the right way, then we have to know that it was transparent. People know what they're consenting to and that the imbalance of power has been addressed a bit. So does that… Yeah, so what does that then mean in the recruitment process? Is there a different type of consent we need to be looking for now than what we have been looking for in the past?

Husna: Well, under the UK GDPR, there are different lawful bases for processing.

Robert: Okay.

Husna: And legitimate interest is perhaps more appropriate for typical recruitment purposes. And to use legitimate interest, there needs to be a legitimate business need, which arguably there is: you have a need to consider an application.

Robert: Yes.

Husna: for someone to get a job in your company. But that doesn't mean it automatically applies. There are steps that need to be taken to demonstrate you have a legitimate interest.

Robert: Right. So you don't have to get consent in that case because you have a legitimate requirement to be able to collect that information.

Husna: But there are some additional steps that need to be taken. It's not a simple fix. You need to carry out what's called a legitimate interest assessment, which is essentially a three step process.

So you might do this in a document, there are lots of creative ways to do this, but essentially you are looking at: what's the purpose of the processing? Is it necessary to use the candidate's data in this way? And is it fair to the individual? And then that fairness piece is built out into a balancing exercise, because you need to be able to demonstrate that your legitimate business interests outweigh the candidate's privacy rights. So there are various factors that need to be considered through this assessment in order to come to the conclusion as to whether or not you can rely on legitimate interest.

Because if you can't demonstrate that your interests outweigh the individual's, you can't rely on it. But the things you might consider are: what are the reasonable expectations of the individual? Is there any invisible processing? Is there any special category data being processed? What is the business need and what safeguards are in place?

Robert: Okay, that's really interesting. And so if you… go down that route of legitimate interests, because it may be easier in all of this, you have to consider those factors. And you were saying that you've got to be able to document those things. Is that right? Because if you get challenged or, you know, what's the... Do you just have to go through a process and put it in a drawer and then if anybody asks you, it comes out? Or is this something that you have to be...

Husna: So this is, it's important to have this because where you rely on legitimate interests, data subjects have a right to object to that use. It's not an absolute right.

Robert: So whereas some of the lawful bases are absolute rights, because, I'm thinking of consent, you can consent, I've given my rights away, whereas this is a different…

Husna: You have the right to withdraw. But in this case, it's, I guess, a bit more conceptual. You've gone through a process and there are arguments on both sides. So a candidate could say, actually I don't agree with your argument that you have a legitimate interest and I'm objecting to that, and there is a process that then needs to be followed.

But if you have a legitimate interest assessment recording the steps you've taken to consider those points that I mentioned before, you can use that as evidence to say we've thought about this carefully and we still believe on balance that we do have a legitimate interest because nothing has changed from when we did our initial considerations. But you do need to revisit this. If something changes in the processing, if it becomes more high risk, you would need to revisit that and consider whether the balance is still tipping in your favour.

Robert: Fantastic. That's very, very helpful. And that's just the starting point in all of this too… and we can come back later on as to how teams might work with their legal and compliance colleagues to establish the right way to go on this.

But once we've got that bit of how we're going to do this, whether through consent or legitimate interest, then come other things to consider too. And partly we've discussed this too around, again, when you talk about transparency, there comes the question of how you are going to use that data as well. And I'd be interested in whether how you're then using that data has an impact on legitimate interest, because we fall then into UK GDPR Article 22 and automated decision making versus automated decision execution.

Perhaps you can share a bit, is it important to understand the difference between those two things?

Husna: It is. If we're just looking in the world of the UK GDPR at the moment, it's very important, because there are strict rules around solely automated decisions which have a legal or similarly significant effect. I'm taking the wording directly from the GDPR there. And it's generally restricted other than in fairly narrow circumstances; there are fairly narrow lawful bases that you can rely on for that.

Robert: And assuming legitimate interest is…

Husna: It's not one of those under the UK GDPR. You can perform those types of decisions provided it is necessary for performance of a contract, you have the explicit consent of the individual, or it's specifically authorised by law. We've already talked about the challenges with consent, and we're talking about explicit consent there.

Robert: So it's a whole other area. A high bar, yes.

Husna: And then performance of a contract is a tricky one because we're talking about the hiring process before a hiring decision has been made. So it is a high bar to achieve. So it feels like the environment for automated decision making is fairly restrictive.

Robert: It's really tough.

Husna: So when we talk about automated decision making, this is where the system is making the decision, it's determining the outcome. So automatically rejecting candidates would be a good example of that.

Robert: Which you could do through... and this is why I think it's really interesting to understand that, because you could automatically reject a candidate based on their educational attainment, to a 2:1 degree or a 2:2. Lots of people have set criteria around that. So does that fall into… solely automated decision making?

Husna: I think we have to look at the level of human involvement and this is where we look at what automated decision execution is. And what I call automated decision execution is where the system supports the process, the automation supports the process, but a human is still very much deciding the logic and is involved in the decision making process. A good example of this might be human sets the criteria and scoring logic

Robert: right, like a 2:1 or…

Husna: yes, exactly. So they might decide what skills are important for the role that we're recruiting for. How do we weight different factors and, sort of, skills in that area? What does good and poor look like? And then the system executes human-defined rules. And they do that consistently across a large volume of candidates.

Robert: And you've got to be able to demonstrate that that's fair.

Husna: Exactly. But in this case, the technology is not making the decision about whether or not you get through to the next…

Robert: The humans have set the criteria and the system is merely just executing on...

Husna: It's executing human judgment at scale. The humans can still come in and they can view the outcomes. They can override scores or they could change the criteria. I think if you can demonstrate that… then you're able to demonstrate meaningful human involvement, which is the key when it comes to automated decision execution.

You need that meaningful human involvement to show it was a human that defined this logic and set this criteria. The system is merely supporting us by executing this so that we can scale our hiring platform and hopefully get more candidates applying for these roles and being able to sift through those applications more

Robert: efficiently.

Husna: More efficiently, exactly. Yeah, yeah.

Robert: Now, and that's very helpful because I think, you know, as a lay person on this, you kind of read the word meaningful human involvement and you go, oh, goodness, does that mean a human has to look at every single data point that comes in and oversee that? Or actually, are there other ways that you can demonstrate meaningful human involvement? And it's been very helpful to hear you describe that.

So does the new Data Use and Access Act change some of the guardrails around this and some of the elements that you've just alluded to have been quite challenging under UK GDPR and Article 22?

Husna: Yes, it creates a framework where this is more workable in practice. It's essentially expanding the current legal framework that we have. And what it does is it introduces legitimate interests as a lawful basis that you could rely on for automated decision making. And when I say that, I mean the solely automated decision making that is treated more strictly under the UK GDPR.

However, again, it's not a blanket basis that you can rely on. You need to have some specific safeguards in place, which I can talk about in a second. And there mustn't be any special category data being processed.

Robert: And there mustn't be any special category data…

Husna: So you can't rely on legitimate interest if special category data is being processed as part of the automated decision making.

Robert: Ah, within that.

Husna: So it doesn't mean you can't collect special category data as part of your hiring system. It's just where the automation is occurring and the decisions are being made that can't process special category data.

Robert: Fine, fine, fine, fine. In the recruitment world, that doesn't alter our ability to collect, which many organisations do, data about neurodiversity requirements to enable them to apply reasonable accommodations. That's not impacted by this?

Husna: No. But what you would want to think about is the safeguards that you might put in place around that type of data. Because it's higher risk and it doesn't need to be part of the decision workflow. So perhaps separating it, keeping it separate from the automated system itself so that it doesn't inadvertently flow into the decision making process.

Robert: Right. Oh, that's very interesting. Because understanding the law now becomes an important part of designing your process and the way that you are going to store and process data too. So just going back to that special category data bit then, because as I said, a lot of organisations do collect that. They should be thinking about how they might actually store that separately rather than collecting it all in one go, holding it all in one place.

Because part of the challenge is, you can be transparent around all of this, but you need to have auditability as well. And so the data needs to be easily accessed. But these are some of the things that people should be considering. Are there any others from a sort of practical point of view that you think that, you know, when we... come to design the recruitment process that we might and should be thinking about?

Husna: Yes, and the design stage is the perfect place to start thinking about that because you can really implement privacy by design.

Robert: Privacy by design. I like the phrase. You're building…

Husna: Well, it's set out in legislation and very commonly used, so I can't take that for myself, unfortunately. But it's really important. It's a really important part of governance and demonstrating accountability and that you have considered risk from the outset.

So it's a great tool and one part of that actually is a data protection impact assessment. So as you're starting your design process, you should also be considering doing a data protection impact assessment, which is your risk assessment for a high-risk processing project, which this will very much be. The data is sensitive. It has a high impact on individuals. It impacts people's livelihoods as to whether or not they get through. And it's not surprising that we tend to see more challenges and complaints and legal disputes in this area. So it's really important to get that right from the design stage.

Robert: And just on that interesting point about the DPIA, the data protection impact assessment. I think up till now, probably a lot of people in the recruitment space have thought, oh, this is just another document I've got to fill in and oh, can I just fill in a couple of questions and then tick the box?

But actually, I think from what you're saying, this is a really important document and it's one that you need to properly think about and act upon, because somebody at some point may well be coming back to you saying, can I see your DPIA and whether you have properly thought this through?

Husna: Yes, I see the data protection impact assessment as your source of truth. It's going to be your key document in this whole risk assessment process. It is really useful at demonstrating good governance to regulators, partners, to candidates, data subjects. It also acts as an audit trail because it sets out all of the risks that you've identified with this project and the mitigating strategies that you've put in place to reduce those risks to an acceptable standard, because really you can't progress the project unless you've mitigated those risks appropriately. It also forces accountability.

It is a living document that should be used by the whole organisation. Anyone that touches your recruitment system needs to be feeding into this.

Robert: Oh really? So it's not just talent acquisition? It could be the people, the hiring managers who are doing some of the interviews on this. Could be some of the IT people if they're managing an applicant tracking system.

Husna: Yes. Yes. Because it should be used to map your hiring workflow, and there will be lots of different points at which AI and automation might come into that workflow and different teams are involved with different parts of the system. You've got the tech side and the product side and then you will have the legal and compliance side and then the teams who are actually using the system.

So all of that needs to be fed into this document because you can't successfully identify risks unless everyone is talking about how they plan to use it.

Robert: And just on that note, who should be responsible and accountable for that document? Is that with legal and compliance or if it is about the process and mitigating risks in the process, is it actually a document that the head of TA should be owning and accountable for?

Husna: Yes, I always think the project owner should be accountable for the document. Legal and compliance will have a big part to play. The privacy team, if you have a data protection officer, they will need to review and sign off on the risk, and then it will need to also be seen by management depending on the nature of the risk. But ultimately the project owner is the person that knows how the system is going to be integrated and used. So they are responsible for owning that document and revisiting it every time a feature changes or there's a different use case. And also it needs to be regularly reviewed. So there should be version controls and regular reviews; everything goes into that document.

Robert: Wow. So it's almost like information security, because that's what we see and we have that same kind of governance process around information security. But I've never seen or heard anybody taking the same level of oversight and seriousness to a DPIA. But from what you're saying, if you want to stay on the right side of the law in all of this and you want to be able to sleep well at night knowing that if there ever is an issue that comes up, then you are on top of it.

And I hadn't thought about that it's a living document too. So because things will be changing all the time in the same way with information security, they're changing all the time. So you need a mechanism where you're going back in and going, what's changed? Are we happy with that change? What's the impact of that change?

Husna: Exactly, and then it will help you spot risks. It also means if you invest the time upfront, when those feature changes come in, it shouldn't take too long to review the document and consider specific risks or look at the existing mitigation strategies you've got and determine actually that's sufficient to protect against this risk. It's not a huge change. It doesn't impact the individual.

Robert: And does the DPIA help you with other regulations too? So there will be quite a few organisations in the UK that have European operations. Is that type of document going to be helpful for the new EU AI Act that's coming into enforcement, I say enforcement, coming into application over the next few years? Or is that a whole separate requirement that people might need to think about too?

Husna: A DPIA is a requirement under the UK and EU GDPR, but it certainly could be used to record AI-related risks. There will be risk assessments required under the EU AI Act. So I imagine if you don't want to have multiple risk assessments floating around, you could combine them under one document. It's not rigid. And it's also used, for example, in the UK to look at other potential risks that might come up under other laws.

So for example, I've been advising a client recently and we've been using the DPIA to look at Online Safety Act risks as well, as to whether certain elements of that are triggered, because there are privacy aspects to consider there. And there are children's data aspects to look at. So you look at the considerations from the ICO's age appropriate design code of practice. So you can pull in different bits of legislation and consider them there.

Ultimately, it's looking at what are the risks to the individual from a privacy perspective, but it's not, it doesn't have to be just under the GDPR, but it's a mechanism that is triggered under the GDPR.


Robert: Such an important point to think about, because it just shows you how broad this could go. You know, where you got me thinking there was the interesting element around the Eightfold AI case: that was coming under consumer protection rather than the things that most people thought, which was the AI laws and regulations. And so that DPIA does help you start thinking a bit broader than you might do otherwise. Just while I think about it, what's the definition of a child or an adult? Is it age? Because I was just thinking that we have apprentices that are coming in. A lot of people recruit for those roles and they could be 16, 17. Does that open this up to a child or?

Husna: It depends on which bit of legislation you're looking at.

Robert: Right.

Husna: But from a data protection perspective in the UK, the ICO considers a child to be anyone under the age of 18. So whilst the age of consent might be 16, I think it is in the UK, and there are certain tools that are available to children over the age of 13, from a data protection perspective children will be under 18.

And so what that means is you need to think about building your products and services with that demographic in mind, having age appropriate language and designs, and avoiding the more intrusive profiling techniques like dark patterns and all of those sorts of things when you're building platforms. So there's just some additional risk-related guidance that you need to consider when you are designing systems that might be open to children.

Robert: Fascinating. It just shows you what a, I say a minefield, but how broad it is that you need to think about things here. And so what's the type of advice then that you'd be giving to a TA leader on how to think about how they go about this and how they work with their legal and compliance colleagues on all this because it feels very complex.

You've alluded to the fact that they actually, as the project owner, need to own this, but they're not legal and compliance experts. And so, yeah, how should they think about this and best prepare when they're, you know, coming to people like you to then give them advice?

Husna: I find it really helpful when people come to me with a clear use case and a workflow. So they've really thought about what is this system or tool that I'm going to ask legal and privacy about? What data does it use and where does that data come from? Is there any AI or automation? Whether the data leaves the UK, because that's quite an important one. We can talk about that in second. Who's the vendor? Do they want to use the data for anything? Are they going to use it to train their own models? Because that's quite an important one that people don't often think about.

And whether there's any additional documentation that you can get from the vendor, do they have any security information? If it is using automation or AI, have they done any testing or auditing, bias monitoring, and can they give you reports to back that up? And have you done any other due diligence on them? If you can get that sort of information, perhaps with some diagrams, I'm a very visual learner. I always find it really helpful to have a user data flow. So I can see where data is going in and out of the product. That's great.

Then it's going to be a much more collaborative partnership and it really helps legal and privacy folks look at something and spot the key issues as quickly as possible. And also a mistake people often make is coming to legal and privacy too late. It's really not helpful to come and say, we've signed up to this tool and someone's mentioned that we probably need to talk to you about this. That's not helpful because we then hold things up, asking lots of questions, having to go through this whole risk assessment process that we've just talked about. And then it makes you look like you're a blocker when actually we want to be an enabler. Come as soon as you can with as much information as possible and then you can build something together that works.

Robert: Yes. And I think that's… so important about having enough time to think this through. Because, and correct me if I've got this wrong in terms of the differentiation between the Workday and Eightfold AI cases and where we might be in the UK and Europe, it's easy to look at those and go, ah, well, we've got to do all this impact assessment stuff, but ultimately it's going to be the vendor that's responsible for all of this if something is out of line. Whereas it seems under UK GDPR and European GDPR, actually, no, you the recruiting organisation are going to be responsible. You can't hide behind a vendor.

Husna: No, you can't defend your system if you don't have the information about what it's doing, if you can't explain how decisions are being made. You might use a vendor to enable you to perform those decisions and have those processes and make your team more efficient.

But ultimately, you're a data controller and you have a responsibility for the data that you're processing, and you can't outsource the compliance element to a vendor. And actually vendors are often the highest risk area, a hidden risk area, if TA teams don't know what those vendors are doing with the data. And if vendors themselves can't explain what's happening, if they can't tell you how decisions are being made or how their software is working, it's…

Robert: A bit of a flag.

Husna: Yes. Yes.

Robert: So that's clearly one area: when you're asking the vendors, you know, can they explain all these elements of, you know, how they're collecting the data, how they're using the data. You made a good point about whether they're using it for training or not. But you also made a comment I just want to pick up on, because I think it's highly relevant, of where the data is stored, because does that change some of the legal elements around that, whether it's UK or Europe or outside of those domains?

Husna: Yes, it just means that you need to think about whether there are restricted transfers. What that means is the data is leaving the UK and going to a jurisdiction that is not adequate, that hasn't been listed as adequate by the UK or the European Commission. I'm thinking back into the EU world; we have adequacy in the UK as well.

Essentially, an adequate country is one where there is an adequacy decision to govern the transfer of that data. Countries without that are called third countries, and you have additional rules that you have to follow to lawfully transfer the data to those countries. And there are mechanisms that can be used. Standard contractual clauses in contracts will be one of them. There is a UK addendum to the European Commission standard contractual clauses. There's currently a data bridge with the US.

Robert: I was going to say, does the US count?

Husna: It does. It has a data bridge. The US is particularly tricky, because sometimes there's a mechanism in place and sometimes there isn't, depending on challenges to those frameworks. The issue being US surveillance laws are fairly far-reaching and it's difficult to accept that the US government could never access the data if it goes to the US due to those surveillance laws. There are various, I'm not going to go into too much detail here because it's fairly comprehensive, but there are various mechanisms in the data bridge arrangements that attempt to protect European and UK data from those laws.

But I think there's likely to be continued, repeated legal challenges against those mechanisms. So they tend to fall away and then you have to look at standard contractual clauses or other mechanisms, but there aren't really very many. And then you also have to do what's called a transfer impact assessment, another impact assessment, which is essentially determining that the laws of the country that you're sending the data to are sufficient and of a similar standard to UK data protection law.

Robert: Gosh, that's a lot for a TA leader to be thinking about and coming to you with a sort of list of things around all of this. It's going to be, first of all, how does the process work? Then it's going to be where is the data sitting? Then it's going to be who are the vendors we're using and what kind of protections have they got in place? Then it's going to be around the transparency of how it all works. And then at the end of all of that, you're in a position, I mean, I imagine it's quite a sort of checklist of things that are in there, and it would be interesting if you've got anything around that, but at the end of that, you've got a sort of list of information from which you can then say,

Right, this is where I think you can get consent. Here's some of the things, because it may involve updating data privacy policies. I think people kind of think about it as, we'll just come to the legal compliance team and we'll tick a box on this one, as opposed to, we might actually identify things that then need to be corrected. So when do they do the data protection impact assessment? Is that before they come to you or after they've had a little bit of feedback?

Husna: It's very much a collaboration. I always think it's worth looping privacy in as soon as possible to the discussion, when you're thinking about a tool, before you've even done an RFP if you have a procurement process. Then it's: these are the sorts of things we're looking for from a tool, and this is what we want it to do because it's going to help us with X. That's really helpful, because then I find it helpful from a design perspective to say, we didn't really dig into that, but the things you then want to think about are: okay, you're going to do this, and how are you going to explain it to the candidates? Because this is where we can build transparency into the system. And this is where the privacy notices will need to be updated.

But there might also be other things that could be used to improve transparency, like design features around just-in-time notices and pop-ups at the time that particularly sensitive information is being asked for or… you know, some processing's happening and it might be useful to explain why you're asking for that data at that time.

Robert: Fascinating. So that transparency piece, really important part of all of this, because you could go through an RFP, design a process, then they come along to you and you go, okay, but I don't see how the candidate is being enabled to exercise their rights in all of this. And then suddenly the whole thing falls apart.

Husna: Yes, because when you're working with automated decisions and AI, it's even more important to be able to explain that. And that is why the DPIA is useful, because it helps you to map out what's happening in the system. At the start of the process, you're going to document: this is what's happening. This is the user journey. This is where all the processing is taking place. You can take that and think about how to explain that in a plain English way to someone who doesn't know anything about automation, and it helps you explain the impacts of those decisions on them too.

So transparency is a key piece to all of this. If you can't explain what you're doing, you can't defend the systems, you can't enable the candidates to make informed choices about their user journey, and you can't explain how they can also access their own rights. There might be a right to request human review because there is a bit of additional automation in it which has a… significant impact on the individual. So they will have a right to request a human review, but they won't know about that if you haven't been transparent and they won't understand why that's important. So there's so many elements to this.

And I often find, being in privacy, drafting is so key and you have to do a lot of writing and thinking about: does this make sense to someone who doesn't know anything about data protection? Because that's what you're trying to achieve.

Robert: And I can understand that, you know, it must be an incredibly hard thing to be able to do well, which is why we need experts like you. But just on that transparency piece, because there are a lot of feature updates that are happening at the moment in the systems around volume recruitment. People are introducing AI tools as a new feature that wouldn't necessarily appear on a… data protection impact assessment, because it just seems, well, we've had this system in place for some time.

So things like, and I'll give you an example of where would you sit on this, which is: oh, I can help you, talent acquisition, really improve the amount of time that you spend trying to find candidates because… my system will go out, it'll go and scrape and find all sorts of bits of data out there. And this is a little bit, I suppose, what the Eightfold one is about, but it'll go and find all of that, and it will match it to the job description that you have put in there. And when you say, oh, that's brilliant, how do you do that? And they say, oh, well, it's done through machine learning and AI, that probably doesn't fit the transparency requirement, does it?

Husna: It's difficult. I mean, it raises alarm bells already with you explaining that scenario to me, and it makes me think of all the potential risks involved. But from a transparency perspective, you're absolutely right: how are those data subjects going to know that their data is being scraped from wherever it's being scraped?

I think there would be a lot of questions in that process, and what assurances are you getting from your third party vendor here that what they are doing is lawful and fair? Because really they need to be able to justify to you why they believe that complies with the UK GDPR to start with. And then you can take that analysis and consider it in the context of your own platform.

Robert: That’s really interesting, yes.

Husna: I feel this tips into the whole invisible processing we were talking about before. And also, are you discriminating or disadvantaging any candidates through this process?

Robert: How would you know?

Husna: How would you know? Exactly. And I guess what's the purpose for you? How are you going to use the data that they bring in? Is this your only sourcing workflow or is this just to enable or enhance the existing sourcing workflow? You don't want to miss out on potential candidates

Robert: Because, exactly, the AI didn't find the right word.

Husna: It made some decisions that weren't necessarily the best decisions or they were biased and you didn't have a chance to review the criteria that it was using to select candidates. There are so many elements to that but ultimately it is invisible processing and it might not be fair or lawful.

How were they scraping? What are the platforms they're going to? There are some platforms that have terms and conditions that say you cannot scrape our platform. You can't use bots to take data. Just because something's publicly available also doesn't mean it's there to be taken. It's still personal data. The rules still apply. So there are so many elements to the scenarios you suggested to dig through. And we haven't even got into the ATS at this point.

Robert: No, well, exactly. That's right. And yeah, how that all works. But I think the point about what you're sharing there is that you've got to be really clear about the process and how data flows within that. And then these guardrails have to be in place if you as the recruiter are not going to be liable for a breach, which everybody wants to avoid.

Husna: Well if you're the data controller you're ultimately responsible for the processing of the data regardless of what the system's doing.

Robert: So we've covered a lot of things in this discussion, Husna, on this one. We've gone through consent versus legitimate interest. We've talked about the importance of a data protection impact assessment and doing that upfront. We've talked about transparency, fairness, and there are so many elements here that can be quite overwhelming for a TA leader thinking, wow, have I just poked a hornet's nest here that I would rather leave in a corner and not worry about? So, have you got any sort of practical takeaways? What would you advise for somebody who's thinking, okay, I get what I just heard here and there are some things I need to think about? What would you advise them to go and do next week or the week after?

Husna: I would suggest there's probably a few things that you could do. Number one, I would suggest mapping your hiring process. Get a piece of paper, draw out what happens all the way from sourcing candidates through to onboarding new hires, and identify throughout that process where AI or automation is happening, and also maybe identify where tools are involved, because you might have different tools for different parts of the process. That will really help you look at what you're currently working with and identify if there are perhaps some conversations to be had with legal and privacy. And you might be fine, but it's a conversation starter.

I then think it's a good idea to speak to your legal team and perhaps agree a short checklist that your teams could fill out before they want to switch on new tools or onboard new tools that they would come to legal with before they do any of that. And it might just ask some of the questions we talked about before. What is the tool going to do? Is there any automated rejection or AI going on in there? Is there any sensitive data that would be processed through this tool? And does the vendor want to use the data for their own purposes? Just a few key questions to try and

Robert: just to set the context, the lay of the land

Husna: and just see how risky it is. It lets the legal team triage it, think, and assess the risk, how long something is going to take, and which path it needs to go down. And then finally, I think for this system, it would be a really good idea to sit down with the team and really think about what meaningful human oversight looks like within this.

We've talked about how important that is and we've talked about when it might be a safeguard that's legally required and when it might actually just be a sensible thing to have from a transparency perspective. But what does that look like in the system that you've currently got and are there improvements or updates or design tweaks that you could make to make sure that it's meaningful and that candidates are able to access the rights and the objections and the challenges that they might be entitled to.

That's what I would suggest over the next few weeks. Nothing groundbreaking, but

Robert: Just getting on top of it, a good place to start, because it's really helpful, this one. And I think many people, before they listened to this episode and the things that you've shared, would probably have thought, oh, I'm largely on top of this. I've done a, you know, DPIA and somebody's filled it in, box ticked. I'll worry about all the other things that, you know, keep me awake during the day when doing, you know, high-volume processing and recruiting.

But actually it sounds as if everybody should go back and have a look at where they are today because the… the laws are changing a bit on this one now. We have a new act that's coming in. But that doesn't necessarily mean that the world is going to get easier on all this. You still have to have all those guardrails in place. You have to demonstrate that you've thought about them. And probably actually quite a lot has changed in the last 12 months of what you thought you had in place and where you are today.

And so doing that refresh audit and looking at where you are might reveal some things that you'll be glad you uncovered. Well, Husna, it's been fascinating talking to you and it's been so helpful just understanding some of the language around this, but also being able to understand some of the practical things that people should be doing. I'm sure that you will have a number of people after this who will be reaching out for your advice and expertise. But I've very much enjoyed this and thank you very much for coming along to the podcast.

Husna: Thanks for having me.

____________

Robert: Well, that was a fascinating discussion, and I loved the way Husna was so clear about the things that everybody needs to be thinking about, and it's a complex area. But when you break it down into the core things that we need to be thinking about, it is something that we absolutely must address. Do it in the right way, and you will have a better and more compliant process.

I think there were four things that I took away from that discussion that were so helpful.

One clearly is to go and do a review of your current process and Husna gave a great sort of checklist of things to go and think about when you're reviewing the current process.

I think the second thing was the importance of the data protection impact assessment. You know, Husna referred to that as a source of truth, but it really is the linchpin between what you have designed and your compliance and safety as a recruiting organisation, in terms of being on the right side of the law on all of this. And that really elevates something that probably most people haven't thought of as a linchpin up to now.

I think the third thing was after you've done all of those things, it enables you then to establish what your legitimacy is to collect the information. And the area that Husna highlighted for me around this was that yes, we can have legitimate interest, but that's not an absolute right. And there are other things that need to be put in place, such as an auditable trail and transparency, that either give us that legitimacy or undermine it.

And then I think the last piece that I really took away from this was how to apply meaningful human oversight. And that both goes into the process itself and how recruiters and hiring managers are involved in setting the criteria that may have a degree of automation around it. But it also, that human oversight has to be applied to the vendors that you might be using and really making sure that they understand and support you as the data controller in meeting your obligations.

A hugely useful and enjoyable podcast session, and if you are thinking about how you might address some of those things please do reach out to either me or Husna via LinkedIn. I'm sure she'd be delighted to help you with the checklist if you want some ideas around that.

And then please remember, whether you're listening to this on Apple Podcasts or Spotify or watching on YouTube, be sure to like and subscribe to the podcast, especially if you found this episode helpful.

If you do, it supports and helps more TA disruptors like you find us and the great guests that we have on this podcast. Thank you.