Monday, 29 August 2011

Are job selection methods actually measuring 'ability to identify criteria'?



While we know that modern selection procedures such as ability tests and structured interviews are successful in predicting job performance, it's much less clear how they pull off those predictions. The occupational psychology process – and thus our belief system of how things work – is essentially a) identify what the job needs, b) distil this into measurable dimensions, c) assess performance on those dimensions. But a recent review article by Martin Kleinmann and colleagues suggests that in some cases we may largely be assessing something else: the “ability to identify criteria”.

The review unpacks a field of research that recognises that people aren't passive when being assessed. Candidates try to squirrel out what they are being asked to do, or even who they are being asked to be, and funnel their energies towards that. When the situation is ambiguous (a so-called “weak” situation), those better at squirrelling – those with high “ability to identify criteria” (ATIC) – will put on the right performance, and those who are worse will put on Peer Gynt for the panto crowd.

Some people are better than others at guessing what an assessment is measuring, so ATIC is a real phenomenon in its own right. And the research shows that higher ATIC scores are associated with higher overall assessment performance, and with better scores specifically on the dimensions candidates correctly guess. ATIC clearly has a 'figuring-out' element, so we might suspect its effects are an artefact of it being strongly associated with cognitive ability, which is itself associated with better performance in many types of assessment. But if anything, the evidence works the other way: ATIC has an effect over and above cognitive ability, and it seems possible that cognitive ability buffs assessment scores mainly through its contribution to the ATIC effect.
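
The 'over and above' claim is usually tested with a hierarchical regression: predict assessment scores from cognitive ability alone, then add ATIC and see how much extra variance it explains. Below is a minimal sketch of that logic in Python, using simulated data with invented effect sizes (purely illustrative, not the review's own analysis).

    import numpy as np

    # Purely illustrative: simulated data, invented effect sizes.
    rng = np.random.default_rng(0)
    n = 200
    cognitive = rng.normal(size=n)                        # cognitive ability
    atic = 0.4 * cognitive + rng.normal(size=n)           # ATIC, partly driven by cognition
    assessment = 0.2 * cognitive + 0.5 * atic + rng.normal(size=n)  # assessment score

    def r_squared(y, *predictors):
        """R^2 from an ordinary least-squares fit with an intercept."""
        X = np.column_stack([np.ones_like(y), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - (y - X @ beta).var() / y.var()

    r2_step1 = r_squared(assessment, cognitive)           # step 1: cognitive ability only
    r2_step2 = r_squared(assessment, cognitive, atic)     # step 2: add ATIC
    print(f"R^2, cognitive only:      {r2_step1:.3f}")
    print(f"R^2, cognitive + ATIC:    {r2_step2:.3f}")
    print(f"Incremental R^2 for ATIC: {r2_step2 - r2_step1:.3f}")

If the second step adds a substantial chunk of variance, ATIC is contributing predictive power that cognitive ability alone cannot account for, which is the pattern the review reports.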

In a recent study, ATIC, assessment performance, and candidates' job performance were examined within a single selection scenario. Remarkably, it found that job performance correlated better with ATIC than with the assessment scores themselves. In fact, the relationship between assessment scores and job performance became non-significant after controlling for ATIC. This raises the provocative possibility that the main reason assessments are useful is as a window into ATIC, which the authors consider “the cognitive component of social competence in selection situations”. After all, many modern jobs, particularly managerial ones, depend upon figuring out what a social situation demands of you.
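
To see what 'controlling for ATIC' means in practice, here is a small Python sketch of a partial correlation, again on simulated data with invented numbers rather than the study's own: if ATIC drives both assessment scores and later job performance, the raw assessment-performance correlation should shrink towards zero once ATIC is partialled out of both.

    import numpy as np

    # Purely illustrative: simulated data, invented effect sizes.
    rng = np.random.default_rng(1)
    n = 200
    atic = rng.normal(size=n)
    assessment = 0.6 * atic + rng.normal(size=n)       # assessment scores pick up ATIC
    performance = 0.5 * atic + rng.normal(size=n)      # job performance also reflects ATIC

    def residualise(y, x):
        """Residuals of y after regressing it on x (with an intercept)."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    zero_order = np.corrcoef(assessment, performance)[0, 1]
    partial = np.corrcoef(residualise(assessment, atic),
                          residualise(performance, atic))[0, 1]
    print(f"Assessment vs performance, raw:                  {zero_order:.2f}")
    print(f"Assessment vs performance, controlling for ATIC: {partial:.2f}")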

So what to make of this, especially if you are an assessment practitioner? We must be realistic about what we are really assessing, which in no small part is 'figuring out the rules of the game'. If you're unhappy about that, there's a simple way to wipe out the ATIC effect: make the assessed dimensions transparent, turning the weak situation into a strong, unambiguous one. Losing the contamination of ATIC gives you more accurate measures of the individual dimensions you decided were important. But your overall prediction of job performance will be weaker, because you've lost the ATIC factor, which does genuinely seem to matter. And while no-one is suggesting that ATIC is all that matters in the job, it may be the aspect of work that assessments are best positioned to pick up.

Kleinmann, M., Ingold, P., Lievens, F., Jansen, A., Melchers, K., & Konig, C. (2011). A different look at why selection procedures work: The role of candidates' ability to identify criteria. Organizational Psychology Review, 1(2), 128-146. DOI: 10.1177/2041386610387000

8 comments:

  1. Very interesting. Doesn't this raise the question of what the "job performance measure" really measures? Assuming it is based on feedback from colleagues/superiors, or on formal criteria, perhaps it is also another proxy for ATIC? "Ability to identify performance assessment criteria".

    This could be harmful if performance metrics are "gamed". Perhaps it would be interesting to see if there is any evidence that those with high ATIC score worse on covert metrics of performance than those with lower ATIC? Or on metrics which are important but may be discounted because of low marginal risk/long time frames/lack of external monitoring. For example, animal welfare in a lab environment, or following the crowd into high-risk behavior (e.g. toxic assets)...

  2. Hi John

    This is an excellent point, and one I'd like to return to on the digest. Every occupational psychologist recognises that obtaining solid, meaningful performance data is one of the biggest challenges in the industry. The risk is of course settling on data that is measurable and collectable without comprehensively reflecting what matters in the job - in such a context, 'gaming' the system could result in perverse effects.

    I like your suggestion of covert metric measurement - no studies spring to mind that really capture this, but it's something to look out for.

    Best
    Alex
