Friday, 16 November 2012

Can you be coached to better outcomes on a situational judgment test?

The situational judgment test (SJT), which asks respondents to choose their preferred course of action in a workplace scenario, has become a popular way of assessing how well candidates fit the attributes of a job or an organisational culture. It's used by governments, the military, police forces, and in educational selection, such as the certification of GPs (medical General Practitioners). Like other popular techniques, it has spawned an industry that promises to help people pass. Can coaching enhance performance on such a test?

Filip Lievens and his team examined this in a real-world context - laboratory studies can lack the motivation to learn that drives coaching's benefits - in the form of August admissions to a Belgian medical school, where candidates take a battery of assessments including an SJT. A challenge is that candidates who seek coaching may differ from their counterparts in ways that could influence their eventual performance, independently of the effect of the coaching itself. Lievens' team addressed this through two routes. Firstly, they used a form of matching called propensity scoring, by which every coached candidate is matched against an uncoached one using scores derived from a range of individual factors, including demographic background, career aspirations, previous academic performance, and the tendency to prepare through other means, such as practice tests. Secondly, the team only included candidates who had failed the assessments in July and had not engaged in any coaching before then, so that July SJT performance could act as a pre-test measure of how candidates did before coaching was introduced. From a larger sample, Lievens' team ended up with 356 matched candidates who met these stringent criteria.
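To make the matching idea concrete, here is a minimal sketch of 1-to-1 propensity-score matching in Python. The data, covariates, and matching rule are illustrative stand-ins invented for this example, not the study's actual variables or procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical covariates: prior academic score, practice-test use, age
X = np.column_stack([
    rng.normal(60, 10, n),       # previous academic performance
    rng.integers(0, 2, n),       # used practice tests (0/1)
    rng.normal(19, 1, n),        # age
])
coached = rng.integers(0, 2, n)  # 1 = sought coaching, 0 = did not

# Step 1: estimate each candidate's probability of seeking coaching
propensity = LogisticRegression().fit(X, coached).predict_proba(X)[:, 1]

# Step 2: pair each coached candidate with the closest uncoached one
# (1-to-1 nearest-neighbour matching without replacement)
available = list(np.where(coached == 0)[0])
pairs = []
for i in np.where(coached == 1)[0]:
    if not available:
        break
    j = min(available, key=lambda k: abs(propensity[k] - propensity[i]))
    pairs.append((i, j))
    available.remove(j)

print(f"Formed {len(pairs)} matched coached/uncoached pairs")
```

The point of the matching step is that any remaining difference between the paired groups is less likely to be driven by the background factors that went into the propensity score.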

Looking only at the August performance, it appeared that coaching did have an effect: matched coached candidates scored an average of 1.5 points higher, an effect size of around .3. But by comparing difference scores - how much candidates improved between July and August - the team found that coached candidates improved by 2.5 points more than uncoached candidates did, an effect size of around .5. The gap between the two analyses arises because the candidates who decided to receive coaching had, on average, been weaker performers the first time around - possibly one reason they invested in assistance - so the August-only comparison understates how much they gained. This effect size is fairly large - a boost of half a standard deviation - especially compared with those for coaching on cognitive tests, which fall between .1 and .15.
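To see why the two comparisons differ, here is a back-of-the-envelope sketch in Python. The numbers are invented, chosen only to roughly mirror the reported pattern (coached candidates starting lower but improving more); they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sd = 178, 5.0  # illustrative group size and score spread

# Hypothetical July (pre) and August (post) SJT scores
coached_pre    = rng.normal(49, sd, n)
coached_post   = coached_pre + rng.normal(4.0, 2, n)    # bigger gain
uncoached_pre  = rng.normal(50, sd, n)
uncoached_post = uncoached_pre + rng.normal(1.5, 2, n)  # smaller gain

# Comparison 1: August scores only (ignores where each group started)
post_gap = coached_post.mean() - uncoached_post.mean()

# Comparison 2: difference in improvement between July and August
gain_gap = ((coached_post - coached_pre).mean()
            - (uncoached_post - uncoached_pre).mean())

# Express both in standard-deviation units (a rough Cohen's d)
pooled_sd = np.sqrt((coached_post.var(ddof=1) + uncoached_post.var(ddof=1)) / 2)
print(f"August-only gap:         {post_gap:.1f} points (d ~ {post_gap / pooled_sd:.2f})")
print(f"Extra gain when coached: {gain_gap:.1f} points (d ~ {gain_gap / pooled_sd:.2f})")
```

Because the coached group starts about a point lower in this toy example, comparing August scores alone hides part of their improvement, which is exactly what the difference-score analysis recovers.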

SJTs are popular with candidates, being intuitive and overtly job-relevant. Employers are also fans: SJTs strongly predict relevant job performance, with incremental value over and above that supplied by ability tests, and they have less adverse impact, with demographic groups typically showing small average differences in performance. But this evidence suggests that their results can be influenced by coaching. Does the coaching increase the underlying ability? It may do, but coaching programmes tend to focus on 'teaching to the test' rather than broader ability, meaning results may be distorted. The researchers suggest this needs to be investigated, and that test developers should explore different scoring techniques and broaden the attributes assessed by SJTs to make them harder to exploit.

Lievens, F., Buyse, T., Sackett, P., & Connelly, B. (2012). The effects of coaching on situational judgment tests in high-stakes selection. International Journal of Selection and Assessment, 20(3), 272-282. DOI: 10.1111/j.1468-2389.2012.00599.x
