Liam Healy & Associates

chartered occupational psychologists

Validity of Selection and Assessment Tools: A Discussion

 

Whereas reliability concerns the accuracy with which an instrument measures something, validity is concerned with what is being measured. The two are related: before we can talk about what we are measuring, we must first have a reliable measure of it, and for this reason validity depends on reliability.
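
One standard psychometric result, the correction for attenuation, makes this dependence concrete: the correlation between a test and a criterion cannot exceed the square root of the product of their reliabilities, r_xy ≤ √(r_xx × r_yy), where r_xx and r_yy are the reliabilities of the test and the criterion measure. An unreliable measure can therefore never be a highly valid one.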

The notion of validity is more difficult to get to grips with than that of reliability. The British Psychological Society has defined it thus: "Validity is the extent to which a test measures what it claims to measure - the extent to which it is possible to make appropriate inferences from the test scores" (BPS Steering Committee on Test Standards).

What this means is that when we claim a measure has good validity for selecting people for a particular job, we must show that there is a link between what it measures and the sorts of things people do in that job. So when we talk about validity we are concerned with whether a measure actually measures what it claims to. Validity is not an absolute: simply saying that a measure is valid is meaningless, because a measure is only ever valid for particular purposes and situations. For example, a numerical ability test might be valid for selecting book-keeping clerks, but it would not be valid for selecting English teachers.

There are three categories of evidence of validity: content-related, construct-related and criterion-related (note that these are not three types of validity, just three types of evidence).

Criterion-related validity provides the most powerful source of evidence for the predictive power of a selection system. It is the most important and most widely used form of evidence, and is concerned with comparing test scores with some external criterion such as job performance. There are two main methods of establishing criterion-related validity: predictive, in which test scores gathered at selection are compared with criterion data collected some time later, and concurrent, in which test scores and criterion data are collected at the same time from existing employees. Predictive validity provides the more powerful evidence of the two.
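
As a rough sketch of what this involves in practice (the figures below are invented, and Python with the SciPy library is assumed purely for illustration), a predictive validity study comes down to correlating test scores gathered at selection with measures of job performance gathered later:

# Minimal sketch of a predictive validity calculation (illustrative data only).
from scipy.stats import pearsonr

# Ability test scores collected at the point of selection
test_scores = [42, 55, 61, 38, 70, 49, 66, 58, 45, 73]

# Supervisor performance ratings for the same ten people, gathered a year later
job_performance = [3.1, 3.8, 4.2, 2.9, 4.6, 3.4, 4.1, 3.9, 3.0, 4.8]

# The validity coefficient is simply the correlation between the two
validity, p_value = pearsonr(test_scores, job_performance)
print(f"Predictive validity coefficient: {validity:.2f}")

A concurrent study would use exactly the same calculation, but with test scores and performance data collected at the same time from existing job holders. The closer the resulting validity coefficient is to 1, the stronger the evidence that the test predicts the criterion.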

Content validity concerns the extent to which the content of a measure adequately samples the job or domain it is meant to cover. It is not a statistical concept and so cannot be described in terms of a correlation co-efficient, nor does it provide us with proof of validity, since it is part of the system development process.

Construct validity concerns whether a measure really taps the underlying psychological characteristic, such as numerical ability, that it claims to assess. Like content validity, it is not a statistical concept and so cannot be described in terms of a correlation co-efficient.

Two other types of validity you will come across are face validity and faith validity. Neither has any theoretical or statistical basis, and neither provides any evidence as to what a measure is actually measuring.

Face validity refers to the acceptability of the measure to the candidate and to what the candidate thinks the measure is measuring. This is important in obtaining the candidate's co-operation and commitment; however, the fact that a measure looks good tells us nothing about its validity.

Faith validity refers to the same kind of belief, but in this case held by the recruiter. While such beliefs matter in the marketing of selection tools (nobody will use a measure they do not believe in), they are not true sources of evidence of validity, and you should be very wary of any suggestion that they are.

Please read the sections on selection system audit and on direct and indirect discrimination for a discussion of why validity is so important.