Assessment days for evaluating the work-relevant behaviours of applicants or job incumbents often draw on actors to perform as difficult team members or curious clients in meeting simulations. A recent study has shown that these role-playing actors can be trained to effectively weave pre-written dialogue prompts into the improvised simulations. However, whether this helps the measurement of participant behaviours is less clear.

Schollaert, E., & Lievens, F. (2011). The use of role-player prompts in assessment center exercises. International Journal of Selection and Assessment, 19(2), 190-197. DOI: 10.1111/j.1468-2389.2011.00546.x
The study authors, Eveline Schollaert and Filip Lievens, trained 19 role-players; in one condition the training included explicit guidance on using behaviour-eliciting prompts during assessment exercises, for example "Mention that you feel bad about it" to provoke behaviours relating to the dimension of interpersonal sensitivity. Such prompts are often provided in preparation materials, but how much they were actually used was unknown. The authors wondered whether role-players could realistically increase their prompt usage through training, or whether this is too much to ask of an actor in the thick of a dynamic interaction.
At a subsequent assessment centre, the role-players interacted in simulations with 233 students from Ghent University. Role-players with prompt training were able to incorporate four to five times more prompts than those without such training, an increase from about two prompts per exercise to 10-12.
More prompts ought to elicit more relevant behaviours, so the authors expected observers to get a better picture of true 'candidate' performance. But the evidence was mixed. In the high-prompt condition, pairs of raters watching the same role-play didn't agree any more closely in their ratings, suggesting the behaviours remained just as obscured as without prompts. That said, some of the ratings corresponded better with other measures you would expect to be related: for instance, interpersonal sensitivity ratings correlated more strongly with an Agreeableness personality score collected before the assessment centre. But half of the predicted increases in correlation weren't observed.
Regarding their unsupported hypotheses, the authors wonder whether the rating assessors should also have been trained on prompt use, to make them more sensitive to candidate reactions. I have additional concerns about the nature of the assessors (minimally trained master's students) being used to draw conclusions about a professionalised domain. Nonetheless, this rare examination of role-player impact on face-to-face assessments suggests training can generate more dimension-focused contributions, which in turn may yield measurements with more predictive power.