Showing posts with label recruitment. Show all posts

Thursday, 16 May 2013

Experienced job interviewers are no better than novices at spotting lying candidates




This post was written by Christian Jarrett and originally found on the BPS Research Digest blog.
 
For the penultimate round of the TV show The Apprentice, the competing entrepreneurs must face a series of interviews with a crack team of hardened executives. The implicit, believable message is that these veterans have seen all the interview tricks in the book and will spot any blaggers a mile off. However, a new study provides the reality TV show with a reality check. A team led by Marc-André Reinhard report that experienced job interviewers are in fact no better than novice interviewers at spotting when a candidate is lying.

The researchers filmed 14 volunteers telling the truth about a job they'd really had in the past and then spinning a yarn about time in a job they'd never really had. The volunteers were offered a small monetary reward to boost their motivation. These clips were then played online to 46 highly experienced interviewers (they'd conducted between 21 and 1000 real-life job interviews), 92 interviewers with some experience (they'd interviewed at least once), and 214 students who'd never before acted as a job interviewer. The participants' task was to identify the clips in which the interviewee was speaking truthfully about their work experience, and the clips in which the interviewee was fabricating.

Overall the participants achieved an accuracy rate of 52 per cent - barely above chance performance, which is consistent with a huge literature showing how poor most of us are at spotting deception. But the headline finding - this is the first time the question has been researched - is that the more experienced interviewers were no better than the novice interviewers at spotting lying job candidates. Greater work seniority, having more work experience and having more subordinates at work were also unrelated to the ability to spot lying job candidates.

There was a glimmer of hope that interview lie-detection skills could be taught. Participants who reported more correct beliefs about non-verbal cues to lying (e.g. liars don't in fact fidget more) were slightly more successful at recognising which job candidates were lying (each correct belief about a non-verbal cue added 1.2 per cent more accuracy on average). Experienced and novice interviewers in the current study didn't differ in their knowledge about lying cues, which helps explain why the veterans were no better at the task. The more experienced interviewers were however more skeptical overall, tending to rate more of the clips as featuring lying.

"Our results provide the first evidence that employment interviewers may not be better at detecting deception in job interviews than lay persons," the researchers said, "although it is a judgmental context that they are very experienced with."

Although the main gist of the results is consistent with related research in other contexts - for example, studies have found police detectives are no better at spotting lies, despite their interrogation experience - this study has some serious limitations, which undermine the applicability of the findings to the real world. Above all, the study did not involve real interviews, which meant the participants were unable to interact with the interviewees in a dynamic manner.

Reinhard, M., Scharmach, M., & Müller, P. (2013). It's not what you are, it's what you know: Experience, beliefs, and the detection of deception in employment interviews. Journal of Applied Social Psychology, 43 (3), 467-479. DOI: 10.1111/j.1559-1816.2013.01011.x

Monday, 6 May 2013

"Wish you were here!" - how a postcard can help attract the best talent

In 2004, in Silicon Valley, Google posted a huge billboard ad featuring a mathematical problem. The answer led to a web address with yet another puzzle to crack. People who successfully followed this intellectual treasure hunt ended up being invited in for a job interview.

This is an extreme example of a recruitment principle spelled out in a new article by psychologists in Belgium. They say that distinctive recruitment procedures are the secret to attracting more and better job applicants, especially in fields like engineering where competition for the best talent is intense.

Working with a Belgian technology company, Saartje Cromheecke and her colleagues sent out a real job opportunity to 1,997 potential applicants, around half of them via email (as is the industry standard), and half via a hand-written postcard depicting a coffee mug and a blank daily agenda. The email and postcard message featured the same layout and included the same written information and content about the job vacancy.

Sixty-two of the contacted engineers applied for the job - 82% of them had received the postcard, while just 18% had received the email. Stated differently, only 1% of the engineers who were emailed actually applied for the job, compared with 5% of those who received a postcard. This latter figure represents a high response rate for the field. Moreover, the respondents to the postcard tended to be better educated, consistent with the researchers' prediction that a recruitment message sent via a "strange" medium will be more likely to grab the attention of better-qualified personnel who aren't actively looking for new opportunities.

The researchers said that social cognition research has shown how we adopt mental "scripts" for different aspects of our lives. "... recruiting in a strange way that differs from what competitors are doing is likely to be inconsistent with recruitment scripts," they said, "enhancing potential applicants' attention, attraction, and intention to apply."

It's important to note that Cromheecke's team aren't saying postcards will always be the answer. Rather, "this field experiment puts forth 'media strangeness' as a more general evidence-based principle, which recruiters might take into account when selecting media for communicating job postings."

This post was written by Christian Jarrett and originally found on the BPS Research Digest blog. 
Cromheecke, S., Van Hoye, G., & Lievens, F. (2013). Changing things up in recruitment: Effects of a 'strange' recruitment medium on applicant pool quantity and quality. Journal of Occupational and Organizational Psychology. DOI: 10.1111/joop.12018

Thursday, 31 January 2013

Do test cheats matter if you test enough people?


Over the past decade, the cheapness and convenience of online testing has seen its usage grow tremendously. Its critics highlight the openings it creates for cheaters, who might take a test many times under different identities, conspire with past users to identify answers, or even employ a proxy candidate with superior levels of the desired trait. Its defenders point to countertactics, from data forensics to follow-up tests taken in person. But the statistical models employed by researchers Richard Landers and Paul Sackett suggest that in recruitment situations, the loss of validity due to online cheating can be recovered simply through the greater numbers of applicants able to take the test.

Landers and Sackett point out that test administrators normally intend to select a certain volume of candidates through testing, such as the ten final interviewees. The accessibility of online testing could allow you to grow your candidate pool, say from 20 to 50. With these numbers, it's now possible to select those who scored better than 80% of the other candidates, rather than merely those in the top half. And if some of your candidates cheat, inflating their scores to the 82nd percentile when they only deserve the 62nd, that's still a better calibre than the 50-or-better you would have been prepared to accept from your smaller face-to-face pool.

Landers and Sackett moved from these first principles to modelling some realistic large data sets containing a range of true ability scores. They considered sets where cheating gave a small (.5 SD improvement) or large (1 SD) bonus to your test score; against this was another factor, how much your natural ability influenced your likelihood of cheating, from no relationship (r = 0) into increasingly strong negative relationships, from -.25 to -.75, modelling the idea that weaker performers are more likely to cheat. And finally, they varied the prevalence of cheating in increments from zero up to 100%.

The researchers ran simulations in each data set by picking a random subset - the 'candidate pool' - and selecting the half of the pool with better test scores. In the totally honest datasets, the mean genuine ability score of selected candidates was .24, but that value was lower for sets that contained cheaters, as some individuals passed without deserving it. Landers and Sackett then added more candidates into each pool, allowing pickier selection, and reran the process to see what true abilities were obtained. In many data sets the loss of validity due to cheating was easily compensated by growth of the applicant pool. For instance, if cheating has only a modest effect and is only mildly related to test ability (r = -.25), then doubling the applicant pool yields genuine scores of .24 even when 70% of candidates are cheating, and higher scores when the cheaters are fewer in number, such as .31 for 30% cheaters.
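The logic of these simulations can be sketched in a few lines of Python. This is a simplified illustration rather than the authors' actual model (which also varied the ability-cheating correlation): it assumes normally distributed true abilities, a fixed cheating probability unrelated to ability, and the function name and parameter values are my own.

```python
import random
import statistics

def mean_selected_ability(pool_size, n_selected, cheat_rate, cheat_boost,
                          trials=2000, seed=42):
    """Average, over many simulated applicant pools, the mean *true*
    ability of the top n_selected applicants by observed test score.
    Cheaters (a random cheat_rate fraction) get cheat_boost added to
    their observed score but not to their true ability."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        applicants = []
        for _ in range(pool_size):
            ability = rng.gauss(0, 1)  # true ability, in SD units
            cheated = rng.random() < cheat_rate
            observed = ability + (cheat_boost if cheated else 0.0)
            applicants.append((observed, ability))
        applicants.sort(reverse=True)  # best observed scores first
        results.append(statistics.mean(a for _, a in applicants[:n_selected]))
    return statistics.mean(results)

# Face-to-face scenario: an honest pool of 20, pick the top 10.
honest = mean_selected_ability(20, 10, cheat_rate=0.0, cheat_boost=0.5)

# Online scenario: the pool grows to 50 (30% of whom cheat for a
# .5 SD boost), but we still pick only 10 - pickier selection can
# more than offset the validity lost to cheating.
online = mean_selected_ability(50, 10, cheat_rate=0.3, cheat_boost=0.5)
```

Even with nearly a third of the larger pool cheating, the mean true ability of those selected from it comes out higher than from the smaller honest pool, which is the paper's central point.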


Great... but wait: there are two important take-aways relating to fairness. It's true that if we're getting .31 averages instead of .24, our selected candidates should be more job-capable, even some of those who did cheat, and that's a win for whoever's hiring. But in the process we've rejected people who by rights deserved to go through. Essentially, this is a form of test error, and so not a uniquely terrible problem, but it's one we shouldn't become complacent about just because the numbers are in the organisation's favour.

Secondly, and as anyone trained in psychometric use will be aware, tightening the selection ratio from the top 50% to the top 25% is no casual step. Best practice is that without evidence, such as an in-house validity study, cut-offs on a single test should be capped at the 40th percentile, meaning you pass 60% of candidates. In particular, raising thresholds can have adverse impact on minority groups, on whom many tests still show differentials (although these are closing over time). As minorities tend to make up a minority of any given applicant pool, such differentials can easily squeeze the diversity out of the process before you even get a chance to sit down with candidates and see what they have to offer in a rounded fashion.

Nevertheless, this paper brings a fresh angle to the issue of test security.


Landers, R., & Sackett, P. (2012). Offsetting performance losses due to cheating in unproctored Internet-based testing by increasing the applicant pool. International Journal of Selection and Assessment, 20 (2), 220-228. DOI: 10.1111/j.1468-2389.2012.00594.x

Further reading:

Tippins, N. T. (2009). Internet alternatives to traditional proctored testing: Where are we now? Industrial and Organizational Psychology, 2, 2–10.

Tuesday, 20 November 2012

Applicants' voluntary experience is valued by recruiters


Job applicants with experience in voluntary roles may be tempted to report this to their prospective employers. But how favourably do recruiters regard these sorts of experience? Christa Wilkin and Catherine Connelly investigated this in a group of professional recruiters, providing them with CVs (resumes) constructed to differ systematically in the types of experience reported. They suspected that other things being equal, work experience may be favoured more when it comes with a wage, as duration in a paid role implies you have met performance and behavioural standards, whereas voluntary positions tend to lack appraisals and focus more on participation (hours of involvement) than evaluating outcomes. Wilkin and Connelly also predicted that voluntary work would be subject to the same 'relevance' criteria as paid: if it didn't obviously supply skills, knowledge and experience that were pertinent to the targeted job, it wouldn't make them more attractive to the recruiter.

The 135 participants each evaluated eight CVs with a target job in mind, rating each one on a seven point scale in terms of how qualified they seemed for the role. The work experience for four CVs was either entirely voluntary or entirely paid, and either clearly relevant or irrelevant. The other four CVs all had a mix of voluntary and paid work in various combinations (e.g., relevant voluntary and irrelevant paid work). In addition, each recruiter recorded how involved they had personally been in voluntary work, to test the hypothesis that first-hand experience may lead them to attribute more value to this kind of work.

Comparison of voluntary and paid-work CVs showed that the recruiters had no significant preference for paid experience, but did favour relevant experience over irrelevant, regardless of type of employment. A recruiter's background of voluntary work had no influence on their ratings of applicants with voluntary experience. Finally, CVs with a mix of experience were rated more favourably than either pure voluntary or pure paid work. Wilkin and Connelly had predicted this, based on the idea that voluntary work can 'round-out' a career history by showing evidence of traits that may not be illuminated in paid opportunities to date, such as altruism, cooperation, and a work ethic. It provides evidence that a candidate may be a welcome presence, which is especially attractive when coupled with evidence that the candidate can also produce results in an appraised environment.

This study paints an optimistic picture for candidates with volunteering backgrounds. Recruiters tend not to automatically deprecate these types of experience: they simply care about how the experience is relevant to the application. Moreover, introducing volunteer work as a complement to paid experience can enhance prospects; this appears to be true even when the volunteering is less relevant, as long as the paid work is relevant, and even though recruiters explicitly maintained that such evidence was unlikely to sway their evaluations.

Wilkin, C., & Connelly, C. (2012). Do I look like someone who cares? Recruiters' ratings of applicants' paid and volunteer experience. International Journal of Selection and Assessment, 20 (3), 308-318. DOI: 10.1111/j.1468-2389.2012.00602.x

Friday, 16 November 2012

Can you be coached to better outcomes on a situational judgment test?

The situational judgment test (SJT), which asks respondents to choose their preferred course of action in a workplace scenario, has become a popular way of assessing fit to the attributes of a job or organisational culture. It's used by governments, the military, police forces, and for educational selection such as certification of GPs (medical General Practitioners). Like other popular techniques, it has spawned an industry that promises to help people pass it. Can coaching enhance performance on such a test?

Filip Lievens and his team examined this in a real-world context - laboratory studies can lack the motivation to learn that drives coaching's benefits - in the form of August admissions to a Belgian medical school, where candidates take a battery of assessments including an SJT. A challenge is that candidates who seek coaching may differ from their counterparts in ways that could influence their eventual performance, independent of the effect of the coaching itself. Lievens' team addressed this through two routes. Firstly, they used a form of matching called propensity scoring, by which every coached candidate is matched against an uncoached one through deriving scores based on a range of individual factors, including demographic background, career aspirations, previous academic performance, and their tendency to prepare through other means, such as practice tests. Secondly, the team only included candidates who had previously failed the assessments in July, and had not engaged in any coaching prior to July. This meant that the July SJT performance could act as a pre-test measure of how candidates did before coaching was introduced. From a larger sample, Lievens' team ended up with 356 matched candidates who fit the stringent criteria.

Merely examining the August performance, it appeared that coaching did have an effect: matched candidates scored an average of 1.5 points higher, with an effect size of around .3. But by comparing the difference scores of how much candidates improved between July and August, the team found that coached candidates improved by 2.5 points more than uncoached, for an effect size of around .5. This is because the candidates who decided to receive coaching on average had been weaker performers the first time around - possibly one reason they invested in assistance. This effect size is fairly large - a boost of half a standard deviation - especially compared to those for coaching in cognitive tests, which fall between .1-.15.
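The arithmetic behind that correction is worth making concrete. The group means below are hypothetical, invented purely to show why the August-only contrast (1.5 points) understates the difference-in-differences estimate (2.5 points) when the coached group starts from a lower baseline; the paper reports effect sizes, not these raw means.

```python
# Hypothetical SJT group means (points), chosen only to illustrate
# the arithmetic of the two contrasts reported in the study.
coached_july,   coached_aug   = 40.5, 44.0  # weaker at pre-test, big gain
uncoached_july, uncoached_aug = 41.5, 42.5  # stronger at pre-test, small gain

# Naive August-only contrast: looks like a modest coaching effect...
august_only = coached_aug - uncoached_aug            # 1.5 points

# ...but coached candidates started from a lower baseline. Comparing
# each group's July-to-August gain removes that baseline gap.
coached_gain   = coached_aug - coached_july          # 3.5 points
uncoached_gain = uncoached_aug - uncoached_july      # 1.0 point
did = coached_gain - uncoached_gain                  # 2.5 points
```

The difference-in-differences figure is the fairer estimate of what coaching added, which is why it yields the larger effect size of around .5.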

SJTs are popular with candidates, being intuitive and overtly job-relevant. Employers are also fans: SJTs are strongly predictive of relevant job performance, with incremental value over and above that supplied by ability tests, and have less adverse impact, with demographic groups typically showing small average differences in performance. But this evidence suggests that their results can be influenced by coaching. Does the coaching result in an increase in the underlying ability? It may do, but programs tend to focus on 'teaching to the test' rather than broader ability, meaning results may be distorted. The researchers suggest this needs to be investigated, and that test developers explore different scoring techniques and broaden the attributes assessed by SJTs to make them difficult to exploit.

Lievens, F., Buyse, T., Sackett, P., & Connelly, B. (2012). The effects of coaching on situational judgment tests in high-stakes selection. International Journal of Selection and Assessment, 20 (3), 272-282. DOI: 10.1111/j.1468-2389.2012.00599.x

Friday, 15 June 2012

Why do job applicants behave the way they do?


Truth, lies and rolling dice. Not a Vegas weekend, but new research looking at applicant self-presentation: how individuals use behaviours to give a favourable account of themselves in job selection situations. We might call it faking, but are applicants just doing what recruiters expect of them?

The researchers, Anne Jansen and colleagues, drew on 53 recruiters (HR professionals) from a range of Swiss companies, and two adult student groups representing applicants (416 Masters students, replicated with 88 vocational apprentices). Both recruiters and applicants were presented with a set of self-presentation behaviours, such as "When applying for the job, I praised the organization" or "When applying for the job, I claimed to have experience that I didn’t actually have".

Recruiters were asked how appropriate the behaviours were, and agreement between their responses was high: they strongly shared expectations for half of the behaviours, with moderate agreement on virtually all the rest. Collectively, they saw some behaviours, such as describing skills or knowledge, as appropriate and uncontroversial; others, such as fabricating details, as definitely inappropriate; while still others, strategic ploys such as de-emphasising negative attributes, fell in between. This shared set of norms is what the research team expected, creating a job selection 'situational script' that recruiters expect to be followed. Did the applicants do so?

Enter the dice. Afraid of being tarred as fakers, people are reluctant to admit to self-presentation, even in supposedly confidential, anonymous research. To address this, the applicants gave responses using the randomised response technique, which asked them to reply truthfully to an item only if they rolled a three or greater on a die - otherwise, they had to respond affirmatively, regardless of the truth. This makes individual profiles impossible to identify while the aggregate data remains analysable, by looking at how responses differ from the base rate.
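Assuming a standard six-sided die (so a forced "yes" with probability 2/6 and a truthful answer with probability 4/6), the base-rate correction works out in a few lines. The function name and the simulated 40% prevalence below are my own illustration, not figures from the paper.

```python
import random

P_FORCED = 1 / 3  # rolled 1 or 2: must answer "yes" regardless of truth
P_TRUTH  = 2 / 3  # rolled 3-6: answer truthfully

def estimate_true_rate(yes_rate):
    """Recover the true prevalence of a behaviour from the observed
    proportion of 'yes' answers, since under this design:
    yes_rate = P_FORCED + P_TRUTH * true_rate."""
    return (yes_rate - P_FORCED) / P_TRUTH

# Quick sanity check by simulation: 10,000 respondents, 40% of whom
# really engaged in the behaviour being asked about.
rng = random.Random(1)
true_rate = 0.40
answers = []
for _ in range(10_000):
    if rng.randint(1, 6) <= 2:                 # forced "yes"
        answers.append(True)
    else:                                      # truthful answer
        answers.append(rng.random() < true_rate)

observed = sum(answers) / len(answers)         # inflated by forced yeses
estimate = estimate_true_rate(observed)        # recovers roughly 0.40
```

No individual "yes" incriminates anyone (it may just mean they rolled a one or two), yet the aggregate prevalence is recoverable, which is exactly what makes the technique useful for sensitive survey items.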

Jansen's team examined this data using correlation to compare the frequency of applicant behaviour with recruiter judgements of that behaviour; they found high correlations, well above .8 (.9 in the larger Masters sample). The frequency of a self-presentation behaviour was strongly related to whether it was something that recruiters saw as acceptable.

The authors see this as the inevitable outcome of a 'strong situation', with right or wrong ways to behave - the shared attitude of the recruiters - where applicants are just trying to follow that script and do what they are 'supposed to', as learned from advice, previous experience, websites, or tacit feedback from the recruiter. Jansen and her colleagues conclude that common reactions to self-presentation behaviours, such as moral condemnation or celebration as a social skill (not dissimilar to the concept of 'ability to identify criteria'), may be attempts to conjure individual qualities from what is mainly a situational phenomenon. Conversely, it seems to me that, as understanding an individual's qualities is so useful in job selection, we would do well to experiment with meeting candidates in weaker, ambiguous situations with no right way to behave, to let them slide off-script and see the real them.

Jansen, A., König, C., Stadelmann, E., & Kleinmann, M. (2012). Applicants' self-presentational behavior. Journal of Personnel Psychology, 11 (2), 77-85. DOI: 10.1027/1866-5888/a000046

Friday, 20 January 2012

2012 Resolution: attract and keep the right people for your workplace




Getting people in

It's all very well having the best methods of selection, but you need to get motivated, capable and well-fitted people interested in working with you.




1. Cultivate a good word-of-mouth reputation to attract highly educated graduates. So treat existing employees well and avoid allegations of hypocrisy by ensuring your internal culture fits with your external brand. The received wisdom of 'campus presence' turns out to be on rather flimsier ground (it may even be counterproductive for world-wise candidates), but the evidence is that people trust word-of-mouth.

2. Ensure online recruitment materials reveal the diversity within the office. There's evidence that both black and white applicants are more likely to peruse sites that present images of diversity, treating it as a marker of merit. Of course, this doesn't mean misleading applicants as to the true nature of your workplace!

3. Treat your intake of young workers as you do graduates: as an investment in the future. Many industries rely heavily on young workers, and experts argue we should take this work more seriously, offering better working conditions, access to training and recognising good performance. That way, those who thrive will recommend their workplace to their social circles, reducing churn costs, and may themselves stay with the company into adulthood, or return after studies.

Keeping people sweet

We're living in an era of unprecedented attention to the notions of wellbeing, satisfaction and happiness. Even if we believe that material conditions are primary – for instance, that money buys you happiness – there are undoubtedly other measures we can take to improve conditions in the workplace, and here the psychological literature can really help.

4. Explore whether your older employees are hankering after managerial responsibilities. Employees older than 45 have a stronger preference to supervise others than their younger colleagues. Of course, "want to" does not equate to "should", but such preferences are likely to drive engagement, so it's unwise to ignore them, especially in a workforce that, at least in the first world, is ageing at an unprecedented rate.

5. Take up volunteering. An unexpected resolution? People who volunteer time out of work gain benefits they carry into the following working day. Actually, it shouldn't surprise: volunteering epitomises many of the evidence-based five ways to wellbeing, including giving, connecting to others, and (often) a degree of physical activity.

6. Experiment with focused breaks to enhance health and energy at work. Maintaining our health at work allows us to function better and avoid illness, stress and burnout. So you may want to explore the idea of packaging activities such as mindfulness meditation, breathing exercises or physical activity into bite-size packages during the working day.

However....

There is a potential dark side to a focus on enjoyment in the workplace. As outlined in this article, an emphasis on "fun" can end up being inauthentic, pressurise everyone into the same mould, and draw young workers into unhealthy dependency on their employer as the source of their social support as well as income. So stand up to cynical uses of fun and socialising in the workplace (7).

Friday, 13 January 2012

2012 resolution: make better selection decisions


A simple resolution, but how to go about it?


1. Review practices to align with your organisation's unique context. As a whole, companies using 'best practice' approaches such as ability tests, structured interviews and monitoring recruitment sources do no better on aggregate than those who don't use these methods. This tells us that it isn't about slavishly following a right formula, but evaluating what's been proven to work elsewhere with your understanding of the local context of your organisation. So consider the below recommendations in this light.

2. Consider introducing well-designed, low effort assessments. There is research to suggest that automated assessments such as tests of knowledge or situational judgement, when well-designed, can do the job virtually as well as more intensive face-to-face assessment. Again, this will depend on your organisation and industry, but it may bear exploring for you.

3. Develop a policy on checking out job applicants online. Recruiters can find it tempting to google applicants or peruse them on social networking sites, getting free, quickly accessible, and otherwise hidden information about them. But there are questions about its fairness, risk of generating feelings of invasiveness, and possibility that it leads to decisions being made that aren't defensible. It's probably already going on in your organisation, so establish some ground rules for how you approach it.

4. Provide focused training to people who play roles in assessment simulations. In particular, evidence suggests focused training helps role players to introduce pre-determined prompts to nudge candidates into showing (or failing to show) critical behaviours; it appears that this may lead to more accurate ratings in some areas.

5. Be realistic about what you are actually measuring. Interview overall scores are strongly influenced by the picture gained from the early minutes where rapport is built. Happily, it seems that this isn't simply bias, but reflects some good information picked up - for instance, verbal ability, and some personality factors. Why not recognise this, perhaps by assigning quick ratings after that initial period?

Meanwhile, and more alarmingly, some researchers suggest that assessment scores of all kinds are heavily influenced by a personal attribute called 'ability to identify criteria'. Again, ATIC does seem to be a good predictor of workplace success in itself, but in both these examples the point is the risk when we assume we are measuring one thing - e.g., the competency "Leading for Success" - when in fact we are measuring another.

And finally....

When I decided to exit research and enter bleary-eyed into the Real World(TM), I was concerned that having a PhD might be a disadvantage. Things turned out ok for me, but it turns out my concerns were well-founded: recruiters do see overqualification as a potential reason not to employ someone. Yet there are a host of reasons why overqualified applicants may be a great addition to your organisation. So reconsider how you approach overqualified candidates.

Tuesday, 8 November 2011

Extreme numbers influence initial salary offers



Whatever some schools of thought advise, it's generally to your advantage to name a price first in negotiations. This is thanks to the anchoring effect, where presenting a value skews later judgments towards it. There is plenty of evidence that setting salary for a new role is influenced by relevant anchors, such as the applicant stating their previous pay or expectations for this job. But decision-making research suggests that estimates and attributions can be influenced by even arbitrary and extreme anchors. Todd Thorsteinson at the University of Idaho set about seeing how crazy numbers might also shape take-home pay.

206 psychology students were asked to make a salary suggestion for a desirable job applicant. Participants were presented with the applicant's description, including two anchors: a realistic one of the applicant's previous salary ($29,000), and an unusual one of either $100k or $1, embedded within a joking statement the applicant made about their salary expectations. The joking context was considered necessary to allow the unusual anchor to be presented without triggering other effects, like being considered overly arrogant or having poor judgment. Participants given the high unusual anchor awarded a higher salary than both those given the low unusual anchor and a control condition with just the realistic anchor.

A second experiment asked its 150 participants to additionally record their perceptions when reading about the applicant, and introduced an even more extreme anchor: one million dollars. Participants were not put off by the extreme anchor, perceiving it as just as plausible and influential as the $100k reference, and in both cases ended up offering the applicant a higher salary than when these high anchors were absent. So, just as in the literature on estimation, even radically inappropriate anchors can sway decisions. It's worth noting too that the unusual anchors had their effect despite being presented alongside realistic ones, as some studies have suggested that in such situations we may simply defer to the more plausible. That wasn't the case here.

There are risks to naming a salary first, such as underselling yourself or pricking the sensibilities of the hirer. So using a joke to introduce an anchoring value may be a safer bet. Organisations may of course respond: using clearly defined pay ranges and clear criteria to shape a fair financial offer for a desired candidate. Both parties should take seriously the power of framing the financial borders of a negotiation.


Thorsteinson, T. (2011). Initiating salary discussions with an extreme request: Anchoring effects on initial salary offers. Journal of Applied Social Psychology, 41 (7), 1774-1792. DOI: 10.1111/j.1559-1816.2011.00779.x

Thursday, 27 October 2011

Black and white applicants more engaged by diversity-friendly recruitment websites

Organisations don't make recruitment websites for their own gratification, but to attract applicants. Ideally, they want informed ones who've gathered a realistic sense of whether the organisation is for them. So recruiters should take note: a recent study has shown that sites that present cues of racial diversity encourage both black and white applicants to browse for longer and encode more information about the organisation.

H. Jack Walker and colleagues had expected that racial diversity cues such as images and testimonials would appeal to black applicants, by indicating that the organisation was sympathetic to their identity. Rather than just surveying attitudes, the team went beyond previous studies by looking at what applicants did during and remembered following site browsing.

In a first study, 141 students evaluated the website of a fictional organisation, which in one condition included a diversity cue - two of the four company representatives on the "Meet Our People" page were black - whereas in the other condition all four reps were white. A second study increased real-world validity by asking 73 students to make judgements about two genuine company sites with high or low diversity cues.

In both studies, the black students (around a third of each sample) recalled more details about the organisation, when tested two to three weeks later, if they had browsed a website containing strong diversity cues. The first study measured browsing time too, and found the black students spent more time on those websites. But all this was also true of the white students: the effects were slightly less pronounced - there was an interaction between presence of cue and applicant race - but they were there nonetheless.

Straight off, I should emphasise that use of diversity cues needs to be sincere: misselling an organisation as diversity friendly is a clear recipe for disaster for applicant and employer alike. With that in mind, there would be ample reason to put sincere diversity cues in recruitment websites even if the effect had been limited to black applicants. Even neglecting the wider social effects, increasing diversity in an organisation widens its talent pool, can improve its performance and makes it more attractive to a broader customer base. But the current study suggests that for black and white applicants, sites containing such cues "are more likely to maintain applicant interest so that website viewers evaluate and retain more website information". In a world of short attention spans, that's got to be worth a lot.

Walker, H., Feild, H., Bernerth, J., & Becton, J. (2011). Diversity cues on recruitment websites: Investigating the effects on job seekers' information processing. Journal of Applied Psychology. DOI: 10.1037/a0025847

Friday, 16 September 2011

Can we get away with using lo-fi assessment to recruit advanced positions?

In recruitment, the promise of comparable results for less effort is understandably tempting. It's offered by replacing costly assessments with alternative measures that use pencils, screens and standardised questions instead of expert assessors. However, as some sources suggest a bad hire can cost twice that position's annual salary or more, the stakes are high. A new study kicks some assessment tyres to see whether that bargain is actually a banger.

Researchers Filip Lievens and Fiona Patterson looked at recruitment into advanced roles, which typically seek candidates with the skills and knowledge to hit the ground running. They took their sample of 196 successful candidates from the UK selection process for General Practitioners (GPs) in medicine. By this stage a candidate has completed up to six years of prior education and two years of basic training, so recruiters are after someone ready to go, not a future 'bright star'. Lievens and Patterson were specifically interested in how much assessment fidelity matters - the extent to which the assessment task and context mirror those of the actual job.

Three types of assessment were involved, all designed by experienced doctors with assistance from assessment psychologists. Written tests assessed declarative knowledge through diagnostic dilemmas such as “a 75-year-old man, who is a heavy smoker, with a blood pressure of 170/105, complains of floaters in the left eye”. Assessment centre (AC) simulations, meanwhile, probed skills and behaviours in an open-ended, live situation such as an emulated patient consultation; these tend to be more powerful predictors of job performance, but are costly.

The third was the situational judgement test (SJT), a pencil and paper assessment in which candidates select actions in response to situations, such as a senior colleague making a non-ideal prescription. SJTs are considered by many to be “low-fidelity simulations”, losing the open-ended, embodied qualities of the AC but hanging on to the what-would-you-do-if? focus. The authors were interested in whether the SJT's predictive power would be in the same class as the AC simulations, or mirror the more modest validity of its pencil and paper counterpart, the knowledge test.

The data showed that all assessments were useful predictors of job performance, as measured by supervisors after a year spent in role. Both types of simulation - AC and SJT - provided additional insight over and above that given by the rather disembodied knowledge test – each explaining about a further 6% of the variance. But in comparison with each other, the simulations were difficult to tell apart, with no significant difference in how well they predicted performance.

It should be noted that the AC simulations did capture some variance over and above the SJT, notably relating to non-cognitive aspects of job performance such as empathy - important because such areas are less trainable than clinical expertise. However, this extra insight was fairly modest, just a few percentage points of variance. More expensive AC assessments can provide additional value, but the study suggests that, at least in this specific recruitment domain, you can get away with a loss of fidelity if the assessments are appropriately designed.

Lievens, F., & Patterson, F. (2011). The validity and incremental validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations for predicting job performance in advanced-level high-stakes selection. Journal of Applied Psychology, 96(5), 927-940. DOI: 10.1037/a0023496

Monday, 4 July 2011

When self-promoting won't help you get a job offer

Impression management is a tactic often used by interviewees hoping to boost their chances of getting the job. One common tack is self-promotion: emphasising your successes and attributing them to your personal qualities rather than to context or good luck. Research shows this is generally a sound strategy. But not always: a team from the University of Neuchâtel, Switzerland has shown that its success depends on the culture your recruiter comes from.

Marianne Schmid Mast and her team gathered 84 recruiters (HR directors, assistants, and recruitment experts) to review a video interview and say how likely they would be to take on the candidate. Half of the recruiters saw a video in which the actor used self-promotion heavily: he attributed successes to internal factors and failures to external ones, and used a quick, fluent speech style with plenty of eye contact and a relaxed posture. For example, he used statements like “I think that I am excellent in everything I do”, which makes me think I saw him on The Apprentice a while back.

The other participants saw the actor in modest mode, making the opposite type of attributions, peppering his speech with pauses and disclaimers like “I'm not sure”, and sitting tensely while fidgeting. Unsurprisingly, the participants rated the actor significantly differently in each condition on measures of modesty and self-promotion - the latter pleasingly including a component of 'pretentiousness'. The bare facts of the situation remained unchanged in each script, making the candidate equally prepared for the technical demands of the job in both cases.

Overall, the self-promoting candidate received higher ratings of likelihood of hiring, in line with previous work. But there was a further layer to the study: participants had been gathered from two different countries - Switzerland, characterised by features such as diplomacy and modesty, and Canada, an 'Anglo' culture composed of people likely to consider themselves unique, proactive, and forceful. The Canadians were enthusiastic about the self-promoter, on average showing a 54% likelihood of hiring him, compared to 21% for the modest candidate. But the Swiss, generally less eager to hire, were only 29% likely to hire the self-promoter, similar to their 24% rating for the modest candidate.

The recruiters may have shared a language (French) but were divided by their culture in how they responded to self-promotion, valuing it less if it was discordant with their own norms. This has relevance for two groups: firstly, candidates should consider cultural context before committing to specific impression management tactics. Secondly, organisations that recruit globally should consider that recruitment in one country may be driven by culturally-desired qualities that don't translate to the country where the applicant may end up. The study videos used recommended 'behavioural interview' questioning, yet still these discrepancies were found, suggesting that organisations should ensure a shared sense of what 'good' looks like in candidate style.

Schmid Mast, M., Frauendorfer, D., & Popovic, L. (2011). Self-promoting and modest job applicants in different cultures. Journal of Personnel Psychology, 10(2), 70-77. DOI: 10.1027/1866-5888/a000034

Wednesday, 29 June 2011

Best practices may not be best for your organisation

If your organisation puts time and effort into implementing best practice HR methods, such as ability testing, it must be reassuring to know it all pays off in the end. Or does it? A recent study involving US financial organisations casts doubt on this belief.

Oksana Drogan and George Yancey were interested in six recruitment technologies generally considered 'best practice': job analysis to see what a candidate needs to perform well; monitoring the effectiveness of recruitment sources; using ability tests; structuring interviews; using validation studies to establish whether recruitment performance translates into job performance; and using BIBs/WABs (biographical information blanks and weighted application blanks), different forms of scoreable application form (SAFs in the UK).

There is already much research on these areas at an individual level. For example, it's well-evidenced that when ability tests are well-designed and appropriate to the job they can predict aspects of individual job performance. But Drogan and Yancey were curious about organisational outcomes: in their case, financial success. Evidence is thinner and equivocal in this domain, so they decided to conduct a fresh investigation to see how these individual promises fare at the organisational level – do they cash out, or do the cheques bounce?

The researchers contacted HR executives from credit unions across the US and surveyed the 122 respondents on whether they used each of the six practices, giving each organisation a 0-6 overall score. They also gathered publicly available financial data on each credit union, rendered into different measures such as market share growth; a quick review confirmed a fair variety in financial performance across the organisations.

However, that variety was not down to the practices used. Firstly, the overall score did not correlate with any of the financial measures. Secondly, for any given practice, the financial success of companies that employed it was no better than that of those that did not. Neither was there any sense of a bedding-in period, with practices becoming more effective over years of use: such an effect was found for only one practice (validation) on just a single financial measure.

The authors conclude that “increasing the technical sophistication of selection procedures alone is not sufficient to influence bottom line results.” They point to other priorities that HR can take: aligning procedures to the unique features of the organisation, or taking an integral approach that recognises that investment in recruitment may be ineffective if this doesn't tie in with how you train new employees. In other words, use a procedure because it's useful here, now, for you, not because it's trumpeted as Best Practice.

Drogan, O., & Yancey, G. (2011). Financial utility of best employee selection practices at organizational level of performance. The Psychologist-Manager Journal, 14(1), 52-69. DOI: 10.1080/10887156.2011.546194

Friday, 20 May 2011

Hiring by online profile: perils and challenges for the networked recruiter

(This post forms part of this month's focus on younger people in the workforce.)

Whether it's holiday snaps, opinions, or your work history, it's likely that you use a social network site (SNS) to express something about yourself. This is especially true for the young: membership of Facebook, the largest SNS, continues to skew towards ages twenty and under. It's unsurprising that recruiters might use these sites to find out more about job applicants; a 2009 poll of 2,600 hiring managers found that 45% had done just that. Now, a new paper by Victoria Brown and E. Daly Vaughn surveys the risks and consequences of allowing online discoveries to influence hiring decisions.

The attractions are clear: recruiters get free, quickly accessible, and otherwise hidden information about applicants. The 2009 poll suggests that 35% of the managers rejected candidates due to SNS evidence, such as unwanted habits or information that contradicts their resume. The evidence can also support candidates by corroborating resumes; employment-centred sites such as LinkedIn exist partly to perform that function.

The first issue Brown and Vaughn raise is perceived invasiveness: trawling through individuals' profiles (and those of their friends, just a few clicks away) can feel like snooping. By harming the candidate's recruitment experience, now recognised as a valuable 'pre-onboarding' phase, this can undermine relations once the candidate is in post.

Secondly, is it fair? An SNS user who shares freely may be sifted out in favour of a counterpart who is cannier at selecting privacy settings, but no better at the job. Moreover, many SNSs detail non-work behaviour, and generalising from there to the workplace may be unwarranted. We can also fall prey to drawing conclusions on the basis of a small sample of 'recent activity'.

Most importantly, the observed behaviours must relate to job criteria to be justifiable for use in employment decisions. An appropriate case would be assessing uploaded images created by a graphic designer, to establish the breadth and quality of their output. But in other cases, information has to be tied to some higher-order construct.

The good news is that some evidence exists that we can judge personality reasonably well on the basis of SNS profiles. But for areas such as verbal communication, we don't have that evidence. (Personally I'm happy to lapse into Facebook patois when I'm on-site. Sincerely sharing communication conventions, or ironically playing at it? Like the Simpsons, I don't even know any more.) The authors also worry that SNS screening may be very prone to biases, given that SNS data gives a ready indication of race, age, disability and other factors that shouldn't be considerations in screening decisions.

The authors suggest organisations should develop policies on SNS use in hiring. They recommend forbidding opportunistic online reviewing of some candidates but not others, and listing appropriate criteria, with standardised rubrics that can be used to evaluate candidates. Even then, where there is no clear evidence legitimising decisions, the authors suggest it may be better for organisations to ban the practice entirely.

Brown, V., & Vaughn, E. (2011). The writing on the (Facebook) wall: The use of social networking sites in hiring decisions. Journal of Business and Psychology. DOI: 10.1007/s10869-011-9221-x