Friday, 13 January 2012

2012 resolution: make better selection decisions

A simple resolution, but how to go about it?

1. Review practices to align with your organisation's unique context. On aggregate, companies using 'best practice' approaches such as ability tests, structured interviews and monitoring recruitment sources do no better than those who don't use these methods. This tells us that it isn't about slavishly following the right formula, but about weighing what's been proven to work elsewhere against your understanding of the local context of your organisation. So consider the recommendations below in this light.

2. Consider introducing well-designed, low effort assessments. There is research to suggest that automated assessments such as tests of knowledge or situational judgement, when well-designed, can do the job virtually as well as more intensive face-to-face assessment. Again, this will depend on your organisation and industry, but it may bear exploring for you.

3. Develop a policy on checking out job applicants online. Recruiters can find it tempting to google applicants or peruse them on social networking sites, gaining free, quickly accessible, and otherwise hidden information about them. But there are questions about its fairness, the risk that candidates will feel their privacy has been invaded, and the possibility that it leads to decisions that aren't defensible. It's probably already going on in your organisation, so establish some ground rules for how you approach it.

4. Provide focused training to people who play roles in assessment simulations. In particular, evidence suggests focused training helps role players to introduce pre-determined prompts to nudge candidates into showing (or failing to show) critical behaviours; it appears that this may lead to more accurate ratings in some areas.

5. Be realistic about what you are actually measuring. Overall interview scores are strongly influenced by the impression formed in the early minutes when rapport is built. Happily, it seems that this isn't simply bias, but reflects some genuinely useful information picked up - for instance, verbal ability and some personality factors. Why not recognise this, perhaps by assigning quick ratings after that initial period?

Meanwhile, and more alarmingly, some researchers suggest that assessment scores of all kinds are heavily influenced by a personal attribute called 'ability to identify criteria' (ATIC). Again, ATIC does seem to be a good predictor of workplace success in itself, but in both these examples the point is the risk when we assume we are measuring one thing - e.g., the competency "Leading for Success" - when in fact we are measuring another.

And finally....

When I decided to exit research and enter bleary-eyed into the Real World(TM), I was concerned that having a PhD might be a disadvantage. Things turned out ok for me, but it seems my fears were well-founded: recruiters do see overqualification as a potential reason not to employ someone. Yet there are a host of reasons why overqualified applicants may be a great addition to your organisation. So reconsider how you approach overqualified candidates.
