Open any report from the World Economic Forum, McKinsey, or Harvard Business Review, and you’ll find one capability that everyone agrees is vital for the AI-enabled workplace: employees' capability to learn, relearn, and unlearn skills in continuous cycles.
Why?
Boston Consulting Group showed that AI and other tech innovations are reducing the shelf life of many skills to just 2.5 years. This means that by the time you’ve attracted, hired, onboarded and ramped up new hires, the skills needed to succeed in their role are likely already changing.
In this new world, what matters most isn’t a huge shopping list of hard skills. It’s the capability to learn, relearn, and unlearn skills in continuous cycles.
We call this capability Learning Agility: how effectively someone learns, pivots, and keeps making progress when the work changes and skill requirements inevitably shift.
Learning Agility is the capacity to pick up new parts of the job, transfer them to new situations, and adjust when the ground moves.
And that makes it one of the critical capabilities that candidates need to be successful in the AI era.
Until now, there hasn't been a good way to measure true learning behaviours in a live environment. So most companies have historically used proxy measures to evaluate learning agility.
The best options we’ve had are:
But this approach relies on candidates telling you they’re agile learners, not showing you they are — meaning candidates can coach themselves on how to ace the questions. And now, they can simply feed the questions into an AI model to score even higher.
Again, ChatGPT can complete these reports in minutes, scoring 98.8% with no specialist prompting. They’re also a blunt measure, missing behaviours that make a difference in new situations: problem-solving, strategy-switching, and reflective learning.
Research shows that educational attainment is a poor proxy for actual learning agility:
“Different methods and combinations of methods have very different validities for predicting future job performance. Some, such as person-job fit, person-organization fit, and amount of education, have low validity.”
— Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). The Validity and Utility of Selection Methods in Personnel Psychology: Updated Tables and a Framework for Assessing Construct Validity. Personnel Psychology, 69(1), 1–55.
Added to this, grade inflation has now made educational attainment even less effective in determining individual differences in learning capability.
Over 70% of UK university students achieve a first-class or upper second-class (2:1) degree, according to the ONS. So relying on grades alone won’t help you narrow down the talent pool or differentiate between which candidates do and don’t have what it takes to thrive in an AI-enabled workplace, especially one where they need to develop new skills in a matter of months. A degree simply doesn’t test for that.
The bottom line is that while AI has increased the need for candidates to be agile learners, there hasn’t been an effective tool for measuring this capability.
Which makes this the challenge to solve:
How do I uncover candidates’ learning agility at the start of the process, giving myself more high-quality candidates who can learn quickly while also saving time with automation?
To assess candidates’ learning agility, we set out to create an environment where candidates had to learn in 'real-time' — and we could observe how they adapted as they progressed through the task.
As candidates complete the task, we measure how they interpret, learn and understand relationships between ambiguous information, integrate feedback, and use this to anticipate new situations.
This gives us an accurate, scientifically robust, unbiased view of candidates' actual Learning Agility.
To see the full version of the task, please click here for a demo.
Source: Arctic Shores - Learning Agility task
A scientifically valid and robust score for each candidate. Plus, insights you can use for sifting, interviews, onboarding and internal mobility, making this measure valuable well beyond the first sift.
The task-based approach gives four advantages to any TA team wanting meaningful differentiation between large volumes of candidates at the top of the funnel.
Don’t take someone’s word for it that they “learn fast”. Our task observes real behaviour in real time — we’ll help you understand how candidates pick up new rules, use feedback, and switch strategies to keep making progress. Giving you a clear signal, built on primary evidence, about which candidates are likely to keep up with rapidly shifting skill requirements from the very first sift.
Unlike text-based inputs (CVs, application answers, self-report questionnaires, or aptitude tests), AI tools can’t complete our task. We’ve tested it. So you can feel confident that you’re getting an accurate measure of Learning Agility, not an AI-inflated one.
Because we can sift out as much as 70% of your talent pool with no bias and no adverse impact, you save hundreds of hours on manual sifting, letting you spend more time nurturing the highest quality candidates. Plus, candidates from non-traditional paths get an equal shot at showing their potential. Our analysis shows this approach creates more diverse shortlists faster, while keeping standards high.
The task is scientifically valid and reliable, having been created in line with British Psychological Society standards. Transparent scoring makes decisions simple to explain to candidates and hiring managers, and keeps your process GDPR and EU AI Act compliant. In short, our task assists you and your hiring managers in making better quality hiring decisions. But it never decides for you, meaning you stay in control of your process with no black boxes or unexplainable outcomes.
A perk of being an Arctic Shores customer is early access to new science-led measures.
The Learning Agility task isn’t sitting in a lab. It’s already in customers’ hands, informing decisions where the stakes are high:
In short, customers are using Learning Agility now, across roles where quality really matters.
That early, practical exposure is how innovation makes its way into fair, faster, more future-fit hiring.
TA used to optimise for what people know; in the AI era, we must optimise for how people learn. The shift is from text to tasks, from claims to behaviour, from brittle proxies to explainable evidence.
Learning Agility becomes the first measure, letting you assess values and motivations later in the process, knowing that your candidates already have the core capability to succeed in the role.
Measure that early and fairly, and you build teams that get ahead when the work inevitably changes.
Interested in future-proofing your hiring? Sign up today for a short, practical session, where we’ll cover: