There’s no shortage of vendors promising AI-powered hiring magic. Faster shortlists. Bias-free decisions. Instant job matches. All backed by ‘cutting-edge’ algorithms and regulatory-sounding claims.
But how do you tell the difference between true innovation and well-packaged snake oil?
In this episode of TA Disruptors, Arctic Shores Co-Founder Robert Newry sits down with Dr Alan Bourne, Partner at Ommati and founder of Sova Assessment, to cut through the noise.
Alan has spent two decades building and evaluating selection systems in both the public and private sectors — and now advises organisations and governments on AI adoption in hiring.
This isn’t a conversation about AI hype. It’s a practical, clear-eyed discussion on how TA leaders can bring AI into their process without compromising fairness, compliance, or candidate trust.
Join Robert and Alan as they discuss:
🧠 How to evaluate AI tools in recruitment without getting blinded by buzzwords. Alan shares the specific questions leaders should ask vendors (and themselves) to cut through the marketing.
⚖️ Why transparency and explainability matter, especially under the EU AI Act. You’ll learn what’s actually required from a compliance perspective and why some ‘automated’ tools may carry more risk than reward.
🔍 Where automation adds value — and where it undermines good hiring. Not all parts of the process should be automated. Alan breaks down which stages benefit from AI, and which ones must stay human-led.
🚩 How to spot the red flags in AI hiring products. From spurious claims to missing documentation, Alan reveals the signals that should make any TA leader pause.
If you’re tired of vague promises and want to lead with clarity — this one’s for you.
Listen now 👇
Transcript:
Robert: Welcome to the TA Disruptors podcast. I'm Robert Newry, Co-Founder and Chief Explorer at Arctic Shores, the task-based psychometric assessment company that helps organizations uncover potential and see more in people. And this is our fourth series of the podcast and we are going to be looking at AI and its disruptive impact on recruitment and selection and talking to a number of TA leaders about how they are dealing with this huge technological shift and transformation that AI is bringing about.
And today I'm very excited to be welcoming an old friend of mine, Alan Bourne. Al is a partner at Ommati, leading their consulting and advisory services, helping organisations build market-leading talent assessments and development capabilities. But Al, you and I first met when you were leading Talent Q when it was part of Hays and just before you went to go and set up Sova Assessment. And in the last 10 years or so, we've both had a lot of fun challenging the status quo in the assessment sector.
And when you set up Sova Assessment, you were certainly thinking about how the world of assessment needs to change. And you built up a, without doubt, a very successful and leading assessment company that was international in its reach and impact. And so you are somebody who has worked with a lot of leading organisations implementing end-to-end assessment programs, rethinking how we measure and what we measure.
And as somebody who has a PhD in organisational psychology and is a chartered occupational psychologist, there are few people who bring the depth of expertise that you do to this space. So welcome to the podcast.
Alan: Thanks for having me and thanks for the very kind introduction as well.
Robert: Not at all. Well, I've been looking forward to having this discussion. You've been quite vocal on LinkedIn about the impact and disruption that's going to come from AI. So let's start with the impact of the AI-enabled candidate. Some people think that this is a major development.
Others, like our old friend Rob Brunner, who posted on LinkedIn the other day saying there seems to be a lot of hype around AI and perhaps too much attention, ask whether we should really care about the development of AI and whether it is as big as some people are claiming. So what's your take on the AI-enabled candidate? Is this a big problem, or is this something that people are overplaying?
Alan: Yeah, sure. I mean, I think as we get into it, there's a whole piece of opportunity which we'll come on to later. But in terms of the way candidates are using AI, it's easy to say this is a bit like when we moved from paper and pencil to the internet. There are some corollaries to that actually, which is true to…
Robert: Because everybody worried about that and said this could be a major problem and it wasn't.
Alan: Yeah. And I think the reason that was not such a drama is that whilst you could get another person to do your test for you at home, it's practically quite hard to do that. You've got to find somebody, you might have to pay them. And there was a small industry doing that, but it wasn't ubiquitous, right? Now everybody has got at their fingertips a range of language models that are pretty good at some of the questions that we give them in this sort of assessment.
Robert: So you don't even have to go out and find somebody, you don't even have to pay anybody really.
Alan: Yeah, it's even free, right? I mean, you just go on the website of ChatGPT or whatever, and you can use some of those tools. So the fact that everybody's got it at their fingertips, that's the bit that's massively different, I think, in terms of just the practical scope of it.
Robert: Yes. So everybody's got it at their fingertips, but that doesn't necessarily mean that everybody is now using it to cheat and manipulate. Have you come across any evidence around that?
Alan: Yes. I mean, I had a dinner conversation with a client the other day who was talking about a volume hiring process and what the challenges were for them. Their experience was much higher application numbers…
Robert: Yes. And is there a study? I mean, we know that the world has changed and that there are more people applying for jobs, but the rapid rise is beyond…
Alan: No, if you look at the Institute of Student Employers data, I think from last year, I believe their members said they'd seen a 59% increase in the number of applications.
Robert: Yes, but that's like,
Alan: Yeah, that's a radically, crazily big number. Right, there are some economic conditions that affect that a bit, but not by that much. So clearly the evidence sitting underneath that suggests, yes, lots of people are trying to do lots of applications. And frankly, why wouldn't you? If you're applying for jobs, you want to get a load of applications out there, because it's a very competitive business and people are facing 70-to-one ratios or more in some hiring processes. So getting a job is really hard work.
Robert: It's a huge, it's huge.
Alan: It's a massive task to do it. So you can see why anybody with half an ounce of common sense would at least use it to do their application form. And I don't think using it to help draft your application form is necessarily cheating anyway. It's just getting some support to write some good blurb. Well, that's right.
Robert: If you're dyslexic or dyspraxic, then...
Alan: Yeah, there's a whole bunch of reasons. So I don't think that's necessarily a problem. The issue is then when you get on to, well, how do we make sure that the assessment bit is fair? And there's a lot of folks who've gone… you know, we're going to have an honesty statement or go, right, we are asking you to not use AI to do these questions when you do this test, whatever. And people click the box and say, yes, I won't use AI. That would be great if it worked.
And the problem is, it looks like… I think Bright Network had a figure of around 8% of people who specifically admitted to using AI to do tests.
Robert: Which is an unusual thing to admit. That probably underplays it.
Alan: But yeah, this is people who said they've done it, right? There was a Capterra survey, and I think it was quite a good market research methodology, where 28% of people said they had at some point used AI to answer questions in a test. Whether they conceptualised that as cheating or not is a different question; people are doing it. You might reasonably go 15% or so, whatever the number in the middle is. A sizeable minority of people are doing it.
So that's your issue, and most people aren't doing it. You've got a big disparity there, right? And as you know, I've written about this: you've got this leapfrogging problem, where that 15%, let's say, at the beginning of the process could be quite a big issue if, at the end of the process, a lot of those people have leapfrogged to the front. They might be a third, could even be half, of the people that you're hiring, right?
And that's quite… And we don't know. The average score isn't necessarily going to reveal that, because there are loads of applications. There might be lower scores over here, some really high scores over here. We haven't really got a forensic approach to get inside what's really going on.
Robert: Yes. I just want to tease that out, because that was quite a big revelation when I read that article. Let's say it is, which is a reasonable hypothesis, that about 15% of people are genuinely using it. It's a high-stakes environment. We know that people are using it in exams to help them get better exam results, so why wouldn't they use it in recruitment? So 15% at the beginning of the process are using it to give themselves an unfair advantage.
You said that by the end of the process, that could mean up to 30% of the people that you're giving job offers to have got there by actually manipulating their results. So how does that number jump from 15 to 30? Why wouldn't it just stay at 15?
Alan: So let's say you've got your distribution of people doing your assessment and it's reasonably normally distributed. Most people are around the middle, there's a bunch of people scoring very low, a bunch scoring very high. That's your normal distribution curve. Exactly. Now, if a subgroup of that are all getting that advantage, let's just say they get a 10 to 15 percentage point advantage or equivalent, they're going to jump forward quite a bit against that distribution. So if your cutoff is at the 40th percentile, let's say, or even higher, you've got this clump of people who are all up front here.
Robert: So they suddenly become more disproportionate because they've got that leg up. So that can be happening.
Alan: And then if you've got this massive, overwhelming number of applications, even if the average score stayed exactly the same as last year, you've got, let's say, 59% more applications. Some of those applications might be terrible. So you'd expect that, notwithstanding this problem, the quality of application has gone down on the one hand, while this group has gone up on the other. So the average score doesn't tell us anything.
Robert: Right, it evens out.
Alan: There isn't any sort of scientific method going on to say, what can we see in these numbers that tells us they've done it? So we don't know. And that's the big issue.
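[An illustrative aside from the editor, not part of the conversation: the leapfrogging effect Alan describes can be sketched with a toy simulation. The figures below, a 12-point boost for 15% of applicants and a cutoff at the 40th percentile, are assumptions chosen purely for illustration.]

import random  # a minimal sketch of the leapfrogging effect under assumed numbers

random.seed(1)
N = 10_000                 # applicants
AI_SHARE = 0.15            # assumed fraction using AI assistance
BOOST = 12                 # assumed percentage-point score boost
CUTOFF_PERCENTILE = 0.40   # assumed pass mark at the 40th percentile

scores, used_ai = [], []
for _ in range(N):
    base = random.gauss(50, 15)              # baseline score out of 100
    ai = random.random() < AI_SHARE
    scores.append(base + (BOOST if ai else 0))
    used_ai.append(ai)

cutoff = sorted(scores)[int(N * CUTOFF_PERCENTILE)]
passers = [ai for s, ai in zip(scores, used_ai) if s >= cutoff]

print(f"average score overall: {sum(scores) / N:.1f}")                    # barely moves
print(f"AI users among all applicants: {sum(used_ai) / N:.0%}")           # ~15%
print(f"AI users above the cutoff: {sum(passers) / len(passers):.0%}")    # noticeably higher

[Even though the overall average barely moves, the share of AI-assisted candidates above the cutoff climbs well past 15%, and the effect compounds at each later, more selective stage.]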
Robert: Yes, and I think that was the kind of revelation, because that also came out in the interview I did with the university professors, where they'd inserted 15% AI students into the course. Everybody in that group was given a leg up, but the results were still normally distributed. So it just meant that if you'd have got a third, you got a 2:2, or a 2:2 became a 2:1, and they didn't see a huge change in the average.
And I think that was the big revelation on this, because we've seen lots of data out there saying, well, the average scores aren't going up, so there clearly isn't any cheating going on. Whereas everybody knows it's going on. And when I've spoken to a few students about this, they sometimes feel: I wonder if I'm a mug, when I know that my roommate has said they've used AI to cheat and they've got through to the interview stage, and I'm not even getting through the first round.
Alan: If we look at how these things work, rules are generally quite normative, I think, culturally. So if a reasonably large number of people are doing something and it's basically not seen as a problem, and your mate has got a job in the first couple of weeks while you're still plugging away on application number 172, there is a bit where you're going to go, why am I doing this? I would have thought that's rational. It might not be positive, but it's a rational place to end up, I think. So you'd expect the number of people using these tools to do some of these traditional assessments to increase.
Robert: It's just going to grow.
Alan: I would imagine. Yes.
Robert: And actually, I think the other big difference between where we are today and where we were before is that the apps and tools available to people are constantly escalating and growing in how sophisticated they are at enabling you to get an advantage. And there will be that grey area between, well, what's an advantage and what's actually cheating.
And so there's a bigger problem there than people realise. Because clearly, if potentially 30% of the cohort that you're giving offers to, as opposed to 10%, have got there without necessarily the capabilities that you thought, this is actually quite a big problem.
Alan: Look, if you think about why that matters, right? There's a moral question, an integrity question, which is probably quite an important cultural consideration: do we want to be promulgating a culture where people who, you know, pull a fast one get to the front quickly? That happens in real life as well to some degree, but there's an element of, do you really want to encourage that? So there's a moral question there.
There's also a validity question, a quality-of-hire question. Have we just hired a bunch of people who actually aren't as good as all these other guys at the top? So we've made some bad decisions because we've allowed this to happen. It's equivalent to asking, if one particular cycling team is all performance-enhanced and the other cycling teams aren't, do they deserve to win? I'll leave it at that. That's an extreme view, but you understand what I mean: are we hiring the best people after we've spent all of this money and effort trying to source them and hire them? We could lose validity. But it takes quite a long time to find that out, so you want to look for these early indicators, of course.
So that's a big issue: are we getting the hires wrong? And then the third point is the fairness of it. Are there various types of discrimination possibly going on, depending on who's doing it? I don't know the answer to that, but that would be a worry as well.
Robert: Yes, no, really important points. And so what's the answer to this, Al? Is the answer that we use AI tools to detect whether AI is being used, or any kind of tool to detect what's going on? Or do we have to go back to more in-person assessment, or is there proctoring? You know, what are the answers?
Alan: Well, I think there are probably three or four ways you could approach it. One of them is, we're going to get everybody to be in person. If we've already got nearly 60% more applicants, then that number is going to grow, and in person was already not commercial. Good luck getting that past the finance director would be my suggestion. So a bit of in-person activity at the end of processes, which already happens by the way, is probably quite valuable for the final selection piece, the human part. But I don't think scaling that is in any way realistic; I just think funding it is too hard. So it doesn't feel very viable, even though obviously you could make it secure.
Secondly, there's proctoring, where you're videoing people doing assessments, and there are multiple layers you can do of that. There's also a whole arms race of how you can get past it. You know, there are apps where you can appear to be present on camera when you're actually off making a cup of tea.
Robert: Yeah, it's one of my favourites.
Alan: So you've got to be relatively advanced to get to that point. But the point I'm driving at is you can still have cheating going on of different descriptions, and it becomes a bit of an arms race. So I think proctoring probably does have a role, if nothing else to make everybody feel happier about it. But again, in the discussion I mentioned earlier, that particular client was very uncomfortable about proctoring because it feels really invasive and quite distrusting of the candidate. I think these things can probably evolve and people get comfortable, but I think that's an interesting consideration. The candidate experience side is not quite as straightforward. It's not creating an atmosphere of trust in a particularly good way, which is tricky.
So I think proctoring to defend older types of question, perhaps that makes sense. But then you've got: well, what new types of assessment can we create? And that can range from the easy and practical to the really advanced. And I think that's probably where the fun stuff is going to be. Well, hang on a minute, we've been measuring a lot of the same psychological constructs for decades, literally, and not an enormous amount has advanced.
And I think I might have mentioned this one to you before, but even when I was back at Talent Q back in the day, and this is mid-2012-ish, let's say, we were looking at what all the innovations in the market were, what could be interesting in the partnership we had with Hay Group, what we could do, and so on. There were lots of cool things that are around now: there were some quite good chatbots then, there were some quite good avatar tools then, various other things, but they weren't scalable on the cloud.
Right, so you could run some stuff locally that was quite interesting, actually. But there was no practical way of scaling it, so the idea that you could make really immersive assessments was quite hard. And you guys have done a lot of that in the gamification space. But scaling all of those more immersive ways of assessing requires quite a lot of firepower from a tech point of view, and 12 years ago I think we weren't there.
I think now the situation is totally different. Most of this stuff, and I'm not just talking about language models but all of the visual AI things as well, we're there or nearly there in being able to simulate lots of different situations. And the key point is being able to do things in real time that you wouldn't realistically be able to copy and paste into ChatGPT to get the answer. And so, rather than just proctoring, even if you do that too, you can create much more exciting ways of assessing people. To me the more important problem is: can we make it less boring, more engaging, and deliver better insight? That's what we should be aspiring to.
Robert: Yeah, I think what's really, really interesting is seeing this disruption and this development as an opportunity to move things forward, rather than, King Canute style, trying to hold things back. Going back to oversight feels quite negative: the proctoring, the deterring and detecting, the 'if you use it and we discover it, you'll be totally thrown out'. It's almost going back against all the work we've been doing to try to make a better candidate experience. And that's an interesting point, which is often the case, that many of the things we consider to be innovations today have actually been around for some time.
So let's just explore that idea a bit: your point that maybe what we should be doing is looking for assessments that are more interactive and help us measure new things, rather than just trying to protect the old way of doing things. Because a lot of people worry: okay, if we're going to do something new, how are we going to know it's valid? And you've got lots of AI tools coming out, avatars and so on. So you've got this idea of, well, we've got the technology now that can make it a bit more interactive, and that sounds good. But the worry for most people is: what I don't want to do is throw the baby out with the bathwater.
Alan: Absolutely. So what are the things that people have got to get right? Maybe if I just paint a couple of examples of what we might do, and then come to your point about how you know it's working, because actually that's the fundamental thing. You shouldn't be doing anything if you don't know it's working. And by working, I mean:
Is it valid, fair, reliable, standardised, et cetera, all of those things? As well as, is it a good experience for the people who have to do it? So that's working, in my view, and we'll come back to that. In terms of how you might do it, there are relatively simple things you can do. I mean, are there any jobs for psychologists writing verbal reasoning items? No, I think we're probably done with that. Are there jobs for people creating really good video or immersive questions, whether it's AI-generated or a person talking something through and then you react immediately? Probably there are.
So at the simplest level, can we create some more immersive, video-driven content to respond to in real time? I think verbal reasoning questions are probably old hat, really. Video interviews are quite tricky; there are certainly lots of cases of people using language models to get good answers to…
Robert: Yeah, I'm hearing a lot of that now.
Alan: So some formats like that, you know, there are some challenges with. But that doesn't mean there's not going to be loads of work for people in the assessment industry. It's a great opportunity, right? We can create some relatively simple, more immersive content. Or we can go the whole nine yards and say, right, can I create a virtual boardroom where I present my analysis of whatever case study you've given me? And by the way, I don't care if you use ChatGPT while you do that analysis. I want you to tell us what you think, and then you might have two or three AI personas that interrogate you, rather than just ask you nice questions, and are experts in different areas. And it means you take all of the best things from an assessment centre, but you bring it into a scalable, digitised format.
So there are many, many things in between. I think when you think about that practically, you can see how it would work: you present some information, you get asked questions by different characters. It's a live experience, it's quite interactive, and the fact that it's in real time makes it relatively hard to cheat. Even if you add a bit of proctoring, it would be a light touch. So that would be, I think, quite a positive experience if done well; the content would feel like being in a live assessment.
And then you need to validate it. We know how to validate assessment centres already; we know how to validate psychometric tests already. So we should apply the same principles. Right, we need to get 100, 200 people who have been hired, work out who got high scores here and how they're performing, and validate it exactly the same as everything else. And then we need to do the adverse impact analysis and check that it's fair, et cetera. We should still be doing the same research.
Just as, hopefully, if you were in the medical field, you wouldn't bring a drug to market without it getting approval, right? It's the same logic: we need to do the scientific work to prove it works. But that doesn't mean we should sit on our hands and do nothing.
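[Another illustrative aside from the editor, not Alan's own method: a common way to run the adverse impact check he mentions is the four-fifths rule, which flags any group whose selection rate falls below 80% of the highest group's rate. The group labels and counts below are invented purely for illustration.]

applicants = {"group_a": 400, "group_b": 250}   # hypothetical applicant counts per group
selected   = {"group_a": 120, "group_b": 55}    # hypothetical offers made per group

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review" if impact_ratio < 0.8 else "ok"   # four-fifths rule threshold
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")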
Robert: Yeah, well, that's right. And I'll come back to the validity piece, because I think that's important when we think about how these things might develop. But just to explore that point about simulations and how we make things interactive, because that clearly is going to be the best way to counter AI: rather than fighting AI with AI, what we want to do is make this more positive. And I know you're quite a fan of this idea of doing tasks, and simulation is a kind of version of that.
And I'd be interested in your take on this; there are sort of two elements to it. You've got one element of a simulation, like the boardroom one you described, where clearly somebody's got experience and you want that experience to be explored using a subject matter expert. But there are plenty of other roles where we're looking for transferable skills, and what you want to know is somebody's raw capabilities that would enable them to acquire a skill.
Absolutely. In which case, you want the simulation to be more generic, I'm assuming, than...
Alan: Yes, I think that's right. If you think about early careers hiring in its various formats, to your point, that's absolutely what you would want to have. There's also a subtlety in there which we haven't touched on, around what I think used to be called trainability, or whatever you want to call it. Basically, giving somebody a task, giving them the opportunity to learn it, and then getting them to deploy it in a slightly harder version. This doesn't have to be just one cross-sectional moment in time either. You can give people experiences and tasks to do and see how well they learn during the task.
All of these things are feasible in theory.
Robert: And you can make it interactive now.
Alan: Exactly, right. So there are lots of things that can be done practically, and to my earlier point, the technology is there. There's over-indexing, particularly when you get back to CVs and interviews, which is even worse, in measuring people based on what they say. I think, whether it's real life or politics, we could probably take a general view that that isn't always the best thing to go on. People fabricate things, or they exaggerate things, and there are all the issues that come with that.
Measuring what people do, broadly speaking, actually gives quite good validity and good results, but it's also very tangible. Yes. And it has very good credibility with hiring managers as well, because it reflects actually doing the job. So the more we can get into that, measuring people doing tasks, whether it's learning to do things and showing potential or showing their capability on the core tasks we're asking of them, the better a job that's going to do of measuring, because it really is similar to the job. It's got high content validity, but it's also going to have natural credibility. So it actually could make the whole experience feel a lot better.
Robert: Well, it could, and I think that's very interesting, because the other thing you led to at the beginning was: can we measure different things, or things that we know are going to be important, that were quite hard to get at with the old model of assessment? So trainability, which at Arctic Shores we refer to as learning agility.
Yes, exactly. Similar things, very equivalent constructs. And with things like learning agility and trainability, you can't just ask somebody how well they learn. You actually need to get them to do something, don't you?
Alan: You do, but there's also a very specific advantage of using AI, and again, this is work still to be done, to help do the scoring of whether somebody's given a good answer or not. You can give an open response and we can analyse it and say, right, your response, whatever you said or did, came out at a four out of five based on everybody else. The ability to do that reliably is the opportunity that AI gives. That's already been demonstrated in video interview analysis; we were doing that at Sova, and other people were as well. So that's already a proven capability.
But that means you can have much more open answers. And that gets you into the interesting space where, if you think about the way the World Economic Forum talks about skills, et cetera: creativity, for example, has always been impossible to do a decent assessment of, because by definition the answers are relatively novel, so you can't put it in a multiple choice questionnaire, right? Similarly, critical thinking across multiple sources of information.
You can model that in, you know, an analysis case study or an assessment centre with humans assessing it, but it's quite hard to even semi-automate the scoring of that. I think we're in a different space now; we can start to assess things of that nature. We need to demonstrate the reliability of doing it, but I think the emerging evidence is that language models can be pretty good at reliably assessing things.
Robert: Yes, I'd like to explore that, because there is a lot of discussion about how large language models using generative AI can start to assess the capabilities of candidates, CVs, backgrounds. And we obviously went through the whole Amazon story back in, whenever it was, 2015, which revealed that if your training data is biased, you've got to watch out for what the AI is up to.
And we have the EU AI Act, which is quite understandably putting some oversight and guidelines around where AI is making a decision and where humans are making a decision. So in your mind, if we are going to use AI in that assessment piece, what is it acceptable to use AI for, and what's not?
Alan: Well, I'd go back to: does it work? Have we got the evidence base that it works for the objective, measuring a particular thing? If you wind back a step, in the UK we've got British Psychological Society test reviews for things like personality questionnaires and ability tests, and there's a very robust, pretty in-depth process. It took quite a long time for it to get legs and get going, and it's a bit bureaucratic, but it's probably a good thing that it's there, I would say. And it's certainly clear about what people are looking for.
Meanwhile, 95% plus of all hiring is done by interview, and unstructured interviews certainly are about the worst possible thing: poor validity, but also terrible from a fairness point of view. So there's a huge amount of sin going on over here that nobody's doing anything about and is wilfully ignoring. We're really robust about some things over there, which is fine because they're more scalable, right? And then we've got other types of assessment where nobody's worked out what to do.
I think one positive move in recent weeks and months is that the Association of Business Psychologists is putting a group together with the intent of creating some clear guidelines on this topic, and doing it quite quickly. So not 'we're going to spend two years thinking about it'; it's, what are the basics that the whole market probably needs to think about, and let's get that out there. So there's some activity underway now, which is probably positive. And yes, it might become more concrete later, but at least there would be a position, which I think would be a really good start, because otherwise it's the Wild West, and that's no good really.
Robert: Well, it is. And as you were saying, there are clearly things that AI can do better than humans, so you're seeing a lot of people going, okay, well, AI can do this better, and lots of people talking about ethical AI, putting guardrails in place, and proprietary models with defined, representative training data. Do those cover all the elements, or is there more to it, I suppose, than just saying, oh, I've got an ethical AI design on this and the training data looks okay?
But there's a lot of stuff where people are just saying, okay, well, I'm going to use AI now to parse a CV, and then they're starting to make decisions at that point. It's moving beyond just the recommendation.
Alan: I think there's a couple of points there. So the first one is you've got process and outcome. So quite a lot of that, as you were describing it, is around the process of how AI is trained and whatever. A lot of the things we're talking about, people probably aren't even training the AI, what they're doing is they're prompting an existing model to do some work for them within an existing app.
So it's not quite the same thing anyway. But on the process of doing it, yes, you want to be clear and transparent about what the process is as far as you can; fundamentally, though, I want to see the outcomes. So I don't care how lovely the process is, where's the outcome that shows me it's valid, it's fair, et cetera?
Robert: Well, and in order to demonstrate that it's valid and it's fair, the process has to be transparent.
Alan: Yes, and you need that to get there; the two things fit together, but you have to do the legwork afterwards. That's always been the case in occupational psychology; that hasn't changed and it's not going to. It's fundamental. The principles of good selection remain the same; apply them intelligently to what you're doing, but the principle is the same. And in a way, there's the legacy of companies like SHL training half of the HR world in Level A and B back in the day. It used to be a very nice week's course with lots of lunches, but whatever.
But the point is, that set a bit of a buyer expectation: buyers were pretty clued up about knowing what good looked like. We probably need to do the modernised version of that. I'm not sure we're going to get the week's training course, but we want to have that knowledge base so that all buyers are pretty clued up about what they're doing, and they can ask the right questions and interrogate the options. And I think that would probably be one of the most helpful things we could do.
Robert: And what are some of those questions that they should be asking to interrogate this? Because there is a little bit of the Wild West about it, and it'd be interesting to see whether the ABP covers some of these elements, because you've got the EU AI Act that classifies what is AI and therefore what is high risk and low risk.
But if you're a TA leader and somebody comes along with a new feature on XYZ-ATS that says: put in your job description, we will scan it for key skills, then we'll go out and scan the internet and source you the 10 top candidates that meet your requirements. Isn't that amazing? And we've done it ethically, it's all been done ethically, and it's built on your job description. Is that okay?
Alan: I think you've given a really good example. So you have that piece of: what's this being used for? That is making a decision; you've made a decision to shortlist 10 people. And so the point is that those principles apply to anything that's clearly a decision of some description. Whether that's a shortlisting decision or a final decision, it's still a decision. And that sort of thing is trying to gently come in from the outside, get around the need to assess people properly, and jump to a quick, easy answer. And then you think, well, what's the data that we've got?
So we've interrogated your database or your performance management data. Most of the performance data is obviously driven by human judgement. So inherently most people would say...
Robert: Subjectiveness.
Alan: Yes, it's subjective and also quite inaccurate. So we've got a load of pretty biased data driving that. Then you've got all the things around what's in the job description: is the job description itself inherently biased, and is there lots of stuff in it that hasn't changed in years? There's so much going on there that, unless you've done the outcome work, you don't know. Well, you can probably make a pretty good guess it's going to be quite biased, but the point is nobody's looked.
Right, so unless we've done the work, we're just letting things come in through the back door here. And I think if you're using tools for sourcing that are really making decisions to a degree, you've got to apply some rigour to that, and I think that's a big gap. The EU Act is obviously trying to regulate the whole industry; each vertical is going to need more subtlety and nuance about what you do in that particular area. So TA needs its own ways of doing things that are probably slightly different to medicine or manufacturing or whatever it might be. It's about what matters in our space.
Robert: And I think that's what's really interesting. I am very much in favour of the EU AI Act, because it's forcing us to apply that rigour and to think about and identify what's at risk. Because, as you say, in a lot of cases in our world we won't know about the problems and issues that we've inadvertently let into the system until many years afterwards, and that is something we've got to be acutely aware of.
Alan: Yes, there was a discussion I had a while back with Simon Defoe, Global Talent Assessment Manager at Vodafone, and he gave the example of healthcare as a corollary. He said, look, in healthcare there are some really good uses of AI now, particularly in radiography and cancer detection and a whole bunch of stuff like that. There's a methodology from that that's quite relevant to what we need to do. And his first point was, if we can do it in healthcare, why can't we do it in TA? Because it's obviously a lot less dangerous.
So there are lessons to borrow from elsewhere about how to think about these problems. One of them is: can you show the outcome data in the way I've described? The other one is: are you confident that you haven't got some dangerous edge cases?
Because obviously in that context that's really important. In ours, again, we need to know, even if the broad-brush data looks okay, have we really messed it up for some group over here? That's right, yeah. So we just need to take the time, and that should be the responsible way of doing it. You can't just do nothing. But I think you have to be doing the analysis to demonstrate it and get good at it, right?
Robert: Yes. And I think that's the point, isn't it: you've got to bring rigour and analysis to this, which you would do in healthcare. The analogy with healthcare works for me because you wouldn't just introduce something and say, look, I've discovered this new drug, I think it's going to be really good, and, however ethically you've done the setup, I'm just going to put it out there and then we'll wait to see the results in three or four years' time.
You're required to put that rigour and analysis up front. And that's why I love the profession of occupational psychology, because we have that rigour and discipline about how the outcome should be arrived at. And I think that's the important thing: it's the outcome, and the process of how we get to it, and knowing that what we've got to is fair, valid and repeatable.
Alan: Yeah, and it's not going to be perfect; you want to manage risk. But to give you an example: for most companies, let's say 70% of their cost base is people. If you spend £50,000 a year or £100K a year or whatever on a piece of software, there's probably a reasonable amount of scrutiny that goes on. You look at some different options, one or two technical teams analyse it and check the info security and so on, which we both know from being on the sales side of these kinds of things.
There are certain levels of rigour in purchasing. Whereas with hiring people, weirdly, the more senior it gets, the worse it often gets. A friend of mine who's a commercial director in an energy company was telling me this the other day. He said, I was given the last two CVs by the hiring manager, who asked, which one of the two do you want? And I got the CVs five minutes before I had to do the interview; there was no preparation time. That's not unusual, I don't think. But his point was, we're making a decision where, for a fairly well-paid senior manager role, we're investing several hundred thousand pounds over a few years, and the level of scrutiny is so light. So there are huge amounts of money and investment going into people without much scrutiny at all. I'm not saying we should go insane over it, but businesses are quite wilfully spending a lot of cash on hiring people without doing even some fairly basic things to manage that a bit better.
Robert: Yeah. And I really like that example, because I've seen quite a lot of stats put out there about the cost of a bad hire being X, Y, Z. And I was speaking to somebody the other day who was taking the approach of, hang on, we need to flip this on its head and say, well, the typical time an employee will be with our organisation is four years, and they can show that.
The average salary is 70,000 pounds, so over four years that's 280,000 pounds. That's the investment, and then you put other things on top of it and you're getting northwards of half a million pounds invested per employee. Why would we not spend a few thousand pounds, as we would on insurance if we were building infrastructure? Why would we not take the same approach, investing in doing this with rigour and getting additional data? And actually that seems more powerful than the cost of a bad hire, because people just go, well, no process is perfect, we're always going to have bad hires on all of…
Alan: Well, I think that's right. Because in that example, let's assume there's a slightly more positive culture, but nevertheless, how much trouble would the person get into if they bought a bit of software or a bit of kit for that value and did literally hardly any diligence? If it then fails, they'd be like, oh, it wasn't my fault, guv. It's like, really? That's quite disappointing. Yes, yes.
Robert: So you clearly think AI has got a part to play in how we think about recruitment and selection. So where are you advising your clients to use AI at the moment? Because we've still got some way to go in working out some of the assessment and selection pieces. But where else?
Alan: Yes, I think that's interesting. There are some steps in the TA journey, and all of these are emergent, with different tools to think about. The first one is helping clients think about what they've got and, as per earlier in the conversation, the defensive side of where we might have issues. So you've got to do a bit of a pick-up-the-carpet-and-see-what's-underneath exercise: okay, honestly look at where the challenges might be. How confident are we really in the validity of who we're hiring now? So you've got to just have a look at that.
Robert: Yeah. And I think you're right on that. Too many people don't do that. They just go, oh, I've got a problem with video interviews, or I've got a problem with recorded interviews. And actually, back to your earlier point about the outcomes, looking under the hood, you've got to look at all the elements.
Alan: Well yeah, you don't need to boil the ocean in terms of the solution, but you do need to have a look. That's right, end to end. We're basically trying to hire the right people to make the organisation work really well. The process is there to serve that objective, so what's going on at these different steps? And there's probably more than one place where it's not quite working.
Robert: Because I like to talk about a golden thread. Yeah, yeah, absolutely. And you can lose that connection because people have tinkered with different bits.
Alan: Yeah, and the bit that you've seen might not be where it's broken. Yes, that's right. So you've obviously got to look at it systemically. There's a lot of work we're involved in helping clients review where they're at, and that's important. In a way, that's the easy bit; it's obviously easier to say where there's a problem. The harder bit, but probably more important, is what we can then do about it. And you mentioned sourcing: what's the mechanism by which we source and engage candidates in a way that gets us a good group of people, properly? How do we then screen them cost-efficiently, but in a way that has integrity?
And then how are we going to make the actual selection? You obviously have a multi-layered process, but what does that look like? I think that's where some of the innovation comes in: how do we do the analysis of what a role looks like and what we need to hire for? You mentioned skills earlier, so what does the skills profile look like for a particular role, and how do we do that well? That's an emergent area where AI has got a really good role to play in quick skills profiling.
Then you've got how we do the assessment and how we deliver it, whether that's through the different types of immersive mechanisms I was describing. There are lots of ways to do it, but how you deliver it, make it a good experience and make it much more engaging, that's a key area.
Then there's how you use AI to support scoring, probably with a human in the loop, as the EU expects anyway. That's key, and it's probably the most scientifically driven bit, and possibly the hardest bit, actually. Other people are solving the surrounding problems; I don't need to invent tools to create personas or do voice recognition, other people are doing that already. It's about how we do our bit.
Robert: It is, and we mustn't get lazy about that.
Alan: And so we've got to be really good at that bit. Then the other piece is what we do around reporting and feedback, whether that's tools to help individuals engage with their feedback in a much more meaningful way that, again, to your earlier point, is task-oriented. So it's, I want to get promoted, or I want to get good at X, how can I do that? There are things we're doing around that. Or the talent intelligence piece.
How do I know that this individual could do role A, role B, or, with the right development, role C? There are multiple points of intervention, and all of these stages are not working brilliantly in most organisations right now. So you want to start to join them all up, you might say. There's quite a bit to aim for, but we can probably pick off one or two of those first and demonstrate how AI can be used responsibly to do those things well. Just like in the healthcare example.
You're not going to release a cancer detection AI tool until you've proven it is safe and it works. Same principle, slightly lower stakes thankfully, but we still have to prove that it's fair and predictive. And once you've nailed that, we're off to the races and off we go. Yeah. So I think that's the next step.
Robert: Great, great summary. And I think that's incredibly useful for people thinking about how to look at all those things. You don't, as you say, have to boil the ocean on all of this; you can look at elements and see where they're not joined up in the way that they should be, and make sure there is human oversight. And I particularly like your point about reporting, because for me the reporting is also on the outcomes, and there needs to be rigorous reporting on that. It's not just, oh, have we got quality of hire; it's being sure that the trends aren't changing in terms of the demographics and the edge cases, as you've said.
Al, this has been a, as I knew it would be, a fascinating discussion. Lots of great insights from you, lots of really good thoughts about how this is developing. And I very much look forward to continuing our discussion, continuing the disruption to the status quo that we bring to the sector. Thank you very much for coming on the podcast.
Alan: Thank you.