What does a good psychometric test look like?

Wednesday 17th June

Gauging good psychometrics

A psychometric test can be entirely valid without necessarily pushing the envelope in terms of innovation. Of course, the reverse is also true – some tests, striving to stand out in a market now worth more than $2bn, might sacrifice validity for novelty. In this growing field, it’s up to you to decide what works best for your needs.

But that’s no easy ask. How can you know what’s truly innovative, while also identifying tests that are valid and reliable? Here’s an outline of what we see as industry-leading, along with our thoughts on what to look for in a genuinely valid assessment.

Innovation

For a long time, the most innovative thing to happen in the field of psychometrics was moving the questionnaire format online. As we discussed in the previous chapter, though, this is now more or less par for the course. Online formats are the norm; the expectation. They’re not exactly pulling up trees.

Let’s look at two areas of the field that are, however. One concerns assessment format; the other, our understanding of psychology and cognitive science:

Format of a Psychometric Assessment

As discussed in the previous section, the self-report test format has its drawbacks, and effectively places a ceiling on how sophisticated your hiring data can be.

Arguably the biggest innovation in the industry in terms of format is the effective removal of the self-report, question/answer element. Some providers have instead developed ways to measure real candidate behaviour, using interactive, game-like interfaces that capture thousands of data points on how candidates respond to specific stimuli in real time. These behaviour-based assessments are often rooted in well-established neuroscience tasks, and so represent an effective blend of the tried-and-tested and the truly innovative.

As we’ve noted, the most efficient and effective way to hire is to use data that can truly predict performance. By allowing you to see how candidates respond in work-relevant scenarios, rather than rely on self-reported data, behaviour-based assessments can support more predictive and intelligent hiring.

It’s also worth bearing in mind their positive impact on candidate experience. We established in the previous section that self-report tests fail to induce any real cognitive flow, and instead mostly breed anxiety. Behaviour-based assessments are far better at encouraging intuitive responses, alleviating candidates’ sense of being under the microscope. This allows for more authenticity in a less stressful environment.

Psychology & Cognitive Science

This innovation of format also feeds into wider developments in psychology and cognitive science. When you assess behaviour, you can measure a wider range of psychological constructs (such as Risk Aversion, or Sensitivity to Reward, both of which are difficult to judge from self-report formats). 

This means that a behaviour-based format allows you to capitalise on the great work currently under way in neuroscience and psychology. Advances here are producing new constructs, as well as faster, more effective ways to measure existing ones. In this sense, assessments that can capture the full range of candidate behaviour should be viewed as innovative.

For more information on advancements in the field of neuroscience (and how they apply to your hiring strategy), we’ve put together a paper with international consulting firm Korn Ferry. You can download either an executive summary or the full paper here.

Reliability & Validity of Psychometric Assessments

As noted previously, it’s not enough to find an innovative assessment. To be certain of its likely value, you need to establish its reliability and validity. Let’s define these quickly: 

Reliability

This relates to consistency of outcomes. If you’re asked what your favourite colour is, you’re likely to arrive at the same answer every time (making this question a reliable one). 

Another example would be a rubber ruler in a warm room. As it expands in the heat, its measurements will change, and your data could differ each time. This lack of consistency makes the ruler an unreliable one.
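
In practice, this kind of consistency (test-retest reliability) is usually quantified as the correlation between two sittings of the same assessment. Here’s a minimal sketch in Python, using invented scores purely for illustration:

```python
# A hypothetical sketch of test-retest reliability: correlate the scores
# from two administrations of the same assessment. All data is invented.
from scipy.stats import pearsonr

# The same ten candidates, assessed twice, a few weeks apart
scores_week_1 = [42, 55, 61, 38, 70, 49, 66, 58, 45, 52]
scores_week_2 = [44, 53, 63, 40, 68, 50, 64, 60, 43, 55]

r, _ = pearsonr(scores_week_1, scores_week_2)
print(f"Test-retest reliability: r = {r:.2f}")  # close to 1.0 means consistent
```

The closer the correlation is to 1.0, the more confident you can be that the test isn’t a rubber ruler.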

Validity

This is slightly trickier. Effectively, validity relates to the relationship between what a test says it measures and what is ‘real’.

It’s an important aspect for you to establish to ensure your investment will actually have the desired positive effect. There are numerous types of validity, some more rigorous than others. We’ve tried to de-jargon some of these for you:

Face validity

Does the test look like it does what it should? This is a relatively soft form of validity, and you shouldn’t set too much store by it alone.

Content validity

Similar to face validity, but with a greater level of detail. Effectively, it asks whether a test’s content covers the kinds of things it should, given what it claims to measure.

Construct validity

Now we get to the more rigorous side of the scale. In short, this relates to whether a test measures what it says it measures. This can be established through statistical modelling, as well as comparisons with other, previously validated tests. 
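
To illustrate what those comparisons can look like, here’s a rough Python sketch with invented scores. Construct-validity evidence is often split into convergent evidence (a strong correlation with a previously validated test of the same construct) and discriminant evidence (a weak correlation with an unrelated measure):

```python
# A hypothetical sketch of construct-validity evidence. All scores are invented.
import numpy as np

new_test        = np.array([55, 62, 48, 71, 66, 53, 59, 68])
validated_test  = np.array([52, 64, 45, 73, 63, 55, 57, 70])  # same construct
unrelated_scale = np.array([30, 41, 38, 29, 44, 35, 40, 31])  # different construct

convergent   = np.corrcoef(new_test, validated_test)[0, 1]    # should be high
discriminant = np.corrcoef(new_test, unrelated_scale)[0, 1]   # should be low
print(f"Convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

A high convergent correlation alongside a low discriminant one suggests the test really is measuring the construct it claims to.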

Criterion validity

Frustratingly, there are two subtypes within criterion validity – predictive and concurrent.

Predictive – ‘Do scores on this test predict something of interest in the future?’

Concurrent – ‘Do scores on this test correspond with something else of interest that we can measure now?’
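
As a rough illustration, again with invented numbers, predictive validity is often checked by correlating assessment scores taken at the point of hire with a performance criterion measured some time later:

```python
# A hypothetical sketch of predictive validity: do scores at hire
# predict job performance later on? All numbers are invented.
import numpy as np

assessment_scores = np.array([62, 48, 75, 55, 81, 43, 68, 59])          # at hire
performance_6mo   = np.array([3.4, 2.9, 4.1, 3.1, 4.3, 2.6, 3.8, 3.2])  # ratings, 6 months in

r = np.corrcoef(assessment_scores, performance_6mo)[0, 1]
print(f"Predictive validity: r = {r:.2f}")

# For concurrent validity, you would instead correlate scores with a
# criterion measured at the same time (e.g. current employees' ratings).
```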
