
Thursday, 16 May 2013

Experienced job interviewers are no better than novices at spotting lying candidates




This post was written by Christian Jarrett and originally found on the BPS Research Digest blog.
 
For the penultimate round of the TV show The Apprentice, the competing entrepreneurs must face a series of interviews with a crack team of hardened executives. The implicit, believable message is that these veterans have seen all the interview tricks in the book and will spot any blaggers a mile off. However, a new study provides the reality TV show with a reality check. A team led by Marc-André Reinhard report that experienced job interviewers are in fact no better than novice interviewers at spotting when a candidate is lying.

The researchers filmed 14 volunteers telling the truth about a job they'd really had in the past and then spinning a yarn about time in a job they'd never really had. The volunteers were offered a small monetary reward to boost their motivation. These clips were then played online to 46 highly experienced interviewers (they'd conducted between 21 and 1000 real-life job interviews), 92 interviewers with some experience (they'd interviewed at least once), and 214 students who'd never before acted as a job interviewer. The participants' task was to identify the clips in which the interviewee was speaking truthfully about their work experience, and the clips in which the interviewee was fabricating.

Overall the participants achieved an accuracy rate of 52 per cent - barely above chance, and consistent with a huge literature showing how poor most of us are at spotting deception. But the headline finding is that the more experienced interviewers were no better than the novices at spotting lying job candidates - the first time this question has been tested directly. Greater seniority, more work experience and having more subordinates at work were likewise unrelated to the ability to spot lying candidates.

There was a glimmer of hope that interview lie-detection skills could be taught. Participants who reported more correct beliefs about non-verbal cues to lying (e.g. liars don't in fact fidget more) were slightly more successful at recognising which job candidates were lying - each correct belief about a non-verbal cue added 1.2 per cent more accuracy on average, so an interviewer holding five correct beliefs would be expected to score around six points higher than one holding none. Experienced and novice interviewers in the current study didn't differ in their knowledge of lying cues, which helps explain why the veterans were no better at the task. The more experienced interviewers were, however, more sceptical overall, tending to rate more of the clips as featuring lying.

"Our results provide the first evidence that employment interviewers may not be better at detecting deception in job interviews than lay persons," the researchers said, "although it is a judgmental context that they are very experienced with."

Although the main gist of the results is consistent with related research in other contexts - for example, studies have found police detectives are no better at spotting lies, despite their interrogation experience - this study has some serious limitations, which undermine the applicability of the findings to the real world. Above all, the study did not involve real interviews, which meant the participants were unable to interact with the interviewees in a dynamic manner.

Reinhard, M., Scharmach, M., & Müller, P. (2013). It's not what you are, it's what you know: Experience, beliefs, and the detection of deception in employment interviews. Journal of Applied Social Psychology, 43(3), 467-479. DOI: 10.1111/j.1559-1816.2013.01011.x

Friday, 15 June 2012

Why do job applicants behave the way they do?


Truth, lies and rolling dice. Not a Vegas weekend, but new research looking at applicant self-presentation: how individuals use behaviours to give a favourable account of themselves in job selection situations. We might call it faking, but are applicants just doing what recruiters expect of them?

The researchers, Anne Jansen and colleagues, drew on 53 recruiters (HR professionals) from a range of Swiss companies, and two adult student groups representing applicants (416 Masters students, replicated with 88 vocational apprentices). Both recruiters and applicants were presented with a set of self-presentation behaviours, such as "When applying for the job, I praised the organization" or "When applying for the job, I claimed to have experience that I didn't actually have".

Recruiters were asked how appropriate each behaviour was, and agreement between their responses was high: they strongly shared expectations for half of the behaviours, with moderate agreement on virtually all the rest. Collectively, they saw some behaviours, such as describing skills or knowledge, as appropriate and uncontroversial; others, such as fabricating details, as definitely inappropriate; while still others, strategic ploys such as de-emphasising negative attributes, fell in between. This shared set of norms is what the research team expected, creating a job selection 'situational script' that recruiters expect to be followed. Did the applicants follow it?

Enter the dice. Afraid of being tarred as fakers, people are reluctant to admit to self-presentation, even in supposedly confidential, anonymous research. To address this, the applicants gave their responses using the randomised response technique: they were asked to reply truthfully to an item only if they rolled a three or higher on a die - otherwise they had to respond affirmatively, regardless of the truth. This makes individual profiles impossible to identify, while the aggregate data remain analysable: because the proportion of forced 'yes' answers is known, the true rate of each behaviour can be recovered from how the observed responses differ from that baseline.
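To make that aggregate recovery concrete, here is a minimal sketch in Python of how a randomised-response item can be simulated and the true rate of a behaviour estimated from the noisy answers. The die rule follows the description above; the function names and the 30 per cent figure are illustrative assumptions, not from the paper:

```python
import random

# Under the rule described above (fair six-sided die):
#   rolls of 1-2 (probability 1/3) force a "yes", regardless of the truth;
#   rolls of 3-6 (probability 2/3) require an honest answer.
FORCED_YES = 1 / 3
TRUTHFUL = 2 / 3

def respond(truth: bool) -> bool:
    """One applicant answers one item under the randomised response rule."""
    roll = random.randint(1, 6)
    return True if roll <= 2 else truth

def estimate_true_rate(responses: list[bool]) -> float:
    """Recover the aggregate rate of genuine 'yes' answers.

    observed_yes = FORCED_YES + TRUTHFUL * true_rate, solved for true_rate.
    """
    observed_yes = sum(responses) / len(responses)
    return (observed_yes - FORCED_YES) / TRUTHFUL

# Simulate 416 applicants, 30% of whom really performed a given behaviour.
random.seed(1)
truths = [random.random() < 0.30 for _ in range(416)]
answers = [respond(t) for t in truths]
print(f"Observed 'yes' rate: {sum(answers) / len(answers):.2f}")
print(f"Estimated true rate: {estimate_true_rate(answers):.2f}")  # close to 0.30
```

It is behaviour frequencies estimated in this way, rather than any individual's answers, that can then be compared with the recruiters' appropriateness ratings.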

Jansen's team used correlation to compare the frequency of each applicant behaviour with recruiters' judgements of that behaviour, and found high correlations, well above .8 (.9 in the larger Masters sample). How often a self-presentation behaviour occurred was strongly related to whether recruiters saw it as acceptable.

The authors see this as the inevitable outcome of a 'strong situation', one with right and wrong ways to behave - the shared attitude of the recruiters - in which applicants are simply trying to follow that script and do what they are 'supposed to', as learned from advice, previous experience, websites, or tacit feedback from the recruiter. Jansen and her colleagues conclude that common reactions to self-presentation behaviours, such as moral condemnation or celebration as a social skill (not dissimilar to the concept of 'ability to identify criteria'), may be attempts to conjure individual qualities from what is mainly a situational phenomenon. Conversely, it seems to me that, as understanding an individual's qualities is so useful in job selection, we would do well to experiment with meeting candidates in weaker, more ambiguous situations with no right way to behave, letting them slide off-script so we can see the real them.

Jansen, A., König, C., Stadelmann, E., & Kleinmann, M. (2012). Applicants' self-presentational behavior. Journal of Personnel Psychology, 11(2), 77-85. DOI: 10.1027/1866-5888/a000046

Monday, 29 August 2011

Are job selection methods actually measuring 'ability to identify criteria'?



While we know that modern selection procedures such as ability tests and structured interviews are successful in predicting job performance, it's much less clear how they pull off those predictions. The occupational psychology process - and thus our belief system of how things work - is essentially: a) identify what the job needs; b) distil this into measurable dimensions; c) assess performance on those dimensions. But a recent review article by Martin Kleinmann and colleagues suggests that in some cases, we may largely be assessing something else: the "ability to identify criteria".

The review unpacks a field of research that recognises that people aren't passive when being assessed. Candidates try to squirrel out what they are being asked to do, or even who they are being asked to be, and funnel their energies towards that. When the situation is ambiguous - a so-called "weak" situation - those better at squirrelling, those with high "ability to identify criteria" (ATIC), will put on the right performance, and those who are worse will put on Peer Gynt for the panto crowd.

Some people are better at guessing what an assessment is measuring than others, so in itself ATIC is a real phenomenon. And the research shows that higher ATIC scores are associated with higher overall assessment performance, and better scores specifically on the dimensions they correctly guess. ATIC clearly has a 'figuring-out' element, so we might suspect its effects are an artefact of it being strongly associated with cognitive ability, itself associated with better performance in many types of assessment. But if anything the evidence works the other way. ATIC has an effect over and above cognitive ability, and it seems possible that cognitive ability buffs assessment scores mainly due to its contribution to the ATIC effect.

In a recent study, ATIC, assessment performance, and candidates' subsequent job performance were examined within a single selection scenario. Remarkably, job performance correlated better with ATIC than with the assessment scores themselves; in fact, the relationship between assessment scores and job performance became non-significant after controlling for ATIC. This offers the provocative possibility that the main reason assessments are useful is as a window into ATIC, which the authors consider "the cognitive component of social competence in selection situations". After all, many modern jobs, particularly managerial ones, depend upon figuring out what a social situation demands of you.

So what to make of this, especially if you are an assessment practitioner? We must be realistic about what we are really assessing, which in no small part is 'figuring out the rules of the game'. If you're unhappy about that, there's a simple way to wipe out the ATIC effect: make the assessed dimensions transparent, turning the weak situation into a strong, unambiguous one. Losing the contamination of ATIC leads to more accurate measures of the individual dimensions you decided were important. But your overall prediction of job performance will be weaker, because you've lost the ATIC factor, which does genuinely seem to matter. And while no-one is suggesting that it is all that matters in the job, it may be the aspect of work that assessments are best positioned to pick up.

Kleinmann, M., Ingold, P., Lievens, F., Jansen, A., Melchers, K., & König, C. (2011). A different look at why selection procedures work: The role of candidates' ability to identify criteria. Organizational Psychology Review, 1(2), 128-146. DOI: 10.1177/2041386610387000

Tuesday, 19 July 2011

Interview decisions are influenced by initial rapport

Research last year demonstrated that interviewees are judged according to their early rapport with the interviewer, even when a highly structured interview format is followed. The same team have now put this finding to the replication test and dug deeper into its causes.

Murray Barrick and colleagues gathered 135 student volunteers keen to improve their interview skills, and put each through two interviews with different interviewers from a pool of business professionals. Each interview proper was firmly structured, with predefined questions on competency areas, but commenced with a few minutes of unstructured rapport building. Each interviewee was rated on initial impressions just after the rapport stage, and their interview responses were evaluated at the end of the interview. Just as in the 2010 study, the early impressions and final interview ratings strongly correlated.

The judgements we form from first impressions are rarely arbitrary but capture information about the other person, so it's possible the influence of pre-interview rapport isn't sheer bias. Through personality testing, Barrick's team found that first impressions were strongly related to interviewee extraversion, emotional stability, agreeableness, and conscientiousness. Conscientiousness is generally associated with better job performance, and tied into several of the study competencies such as 'work ethic' and 'drive for results'. The other traits, while not necessarily desirable in all roles, can appear attractive qualities in a prospective organisational member.

Initial impressions also correlated with volunteers' self-perception of how qualified they were for the job, and with an independent measure of verbal skill. The latter was assessed through a separate task in which the volunteers interacted face-to-face with a series of peers, who rated features such as articulacy of speech. These findings suggest that the rapport-building stage was giving early insight into some sense of perceived fit to the specific role, and into genuine candidate ability, in addition to personality factors. Through careful analysis, the researchers found that all of these factors influenced the final interview ratings, and that this was due to the way they shaped first impressions: after those first few minutes, these qualities had little extra influence across the rest of the interview.

As social animals we're reluctant to do away with rapport altogether, and impressions can form even in a matter of seconds. The researchers suggest - with the caveat that more research is needed - that interviewers may as well embrace the first impression, explicitly evaluating some relevant criteria, such as those identified in this study, once the rapport stage is over. And candidates shouldn't unduly panic: this study reveals that the first impression is partly down to an accurate appraisal of some of your true qualities - things you can't do very much about.

Barrick, M., Dustin, S., Giluk, T., Stewart, G., Shaffer, J., & Swider, B. (2011). Candidate characteristics driving initial impressions during rapport building: Implications for employment interview validity. Journal of Occupational and Organizational Psychology. DOI: 10.1111/j.2044-8325.2011.02036.x

Monday, 4 July 2011

When self-promoting won't help you get a job offer

Impression management is a tactic often used by interviewees hoping to boost their chances of getting the job. One common tack is self-promotion: emphasising your successes and attributing them to your personal qualities rather than to context or good luck. Research shows this is generally a sound strategy. But not always; a team from the University of Neuchâtel, Switzerland, has shown that it depends on the culture your recruiter comes from.

Marianne Schmid Mast and her team gathered 84 recruiters - HR directors, assistants, and recruitment experts - to review a video interview and say how likely they would be to take on the candidate. Half of the recruiters saw a video in which the actor used self-promotion heavily: he attributed successes to internal factors and failures to external ones, and used a quick, fluent speech style with plenty of eye contact and a relaxed posture. As an example, he used statements like "I think that I am excellent in everything I do", which makes me think I saw him on The Apprentice a while back.

The other participants saw the actor in modest mode: making the opposite type of attributions, peppering his speech with pauses and disclaimers like "I'm not sure", and sitting tensely while fidgeting. Unsurprisingly, participants rated the actor significantly differently in each condition on measures of modesty and self-promotion - the latter pleasingly including a component of 'pretentiousness'. The bare facts of the situation remained unchanged in each script, making the candidate equally prepared for the technical demands of the job in both cases.

Overall, the self-promoting candidate received higher ratings of likelihood of hiring, in line with previous work. But there was a further layer to the study: participants had been gathered from two different countries - Switzerland, characterised by features such as diplomacy and modesty, and Canada, an 'Anglo' culture whose people are likely to consider themselves unique, proactive, and forceful. The Canadians were enthusiastic about the self-promoter, on average giving a 54 per cent likelihood of hiring him, compared with 21 per cent for the modest candidate. But the Swiss, generally less eager to hire, were only 29 per cent likely to hire the self-promoter, similar to their 24 per cent rating for the modest candidate.

The recruiters may have shared a language (French) but were divided by culture in how they responded to self-promotion, valuing it less when it was discordant with their own norms. This has relevance for two groups. Firstly, candidates should consider cultural context before committing to specific impression management tactics. Secondly, organisations that recruit globally should consider that recruitment in one country may be driven by culturally desired qualities that don't translate to the country where the applicant ends up. The study videos used recommended 'behavioural interview' questioning, yet these discrepancies were still found, suggesting that organisations should ensure a shared sense of what 'good' looks like in candidate style.

Schmid Mast, M., Frauendorfer, D., & Popovic, L. (2011). Self-promoting and modest job applicants in different cultures. Journal of Personnel Psychology, 10(2), 70-77. DOI: 10.1027/1866-5888/a000034

Wednesday, 29 June 2011

Best practices may not be best for your organisation

If your organisation puts time and effort into implementing best-practice HR methods, such as ability testing, it must be reassuring to know it all pays off in the end. Or does it? A recent study involving US financial organisations casts doubt on this belief.

Oksana Drogan and George Yancey were interested in six recruitment technologies generally considered 'best practice': job analysis, to see what a candidate needs to perform well; monitoring the effectiveness of recruitment sources; using ability tests; structuring interviews; using validation studies to establish whether recruitment performance translates to job performance; and using BIBs/WABs (biographical information blanks and weighted application blanks), different forms of scoreable application form (SAFs in the UK).

There is already much research on these areas at an individual level. For example, it's well-evidenced that when ability tests are well-designed and appropriate to the job they can predict aspects of individual job performance. But Drogan and Yancey were curious about organisational outcomes: in their case, financial success. Evidence is thinner and equivocal in this domain, so they decided to conduct a fresh investigation to see how these individual promises fare at the organisational level – do they cash out, or do the cheques bounce?

The researchers contacted HR executives from various credit unions across the US and surveyed the 122 respondents on whether they used each of the six practices, giving each organisation a 0-6 overall score. They also gathered publicly available financial data on each credit union, rendered into different measures such as market-share growth; a quick review confirmed a fair variety in financial performance across the organisations.

However, that variety was not down to the practices used. Firstly, the overall score did not correlate with any of the financial measures. Secondly, for any given practice, the financial success of companies that employed it was no better than that of those that did not. Nor was there any sense of a bedding-in period, with practices becoming more effective over years of use: such an effect was found for only one practice (validation), and on just a single financial measure.

The authors conclude that "increasing the technical sophistication of selection procedures alone is not sufficient to influence bottom line results." They point to other priorities HR can pursue: aligning procedures to the unique features of the organisation, or taking an integrated approach that recognises that investment in recruitment may be ineffective if it doesn't tie in with how you train new employees. In other words, use a procedure because it's useful here, now, for you - not because it's trumpeted as Best Practice.

Drogan, O., & Yancey, G. (2011). Financial utility of best employee selection practices at organizational level of performance. The Psychologist-Manager Journal, 14(1), 52-69. DOI: 10.1080/10887156.2011.546194

Wednesday, 18 May 2011

An interview with Jim McKechnie on child employment

(Jim McKechnie is a professor in the Social Sciences department at the University of the West of Scotland. Following his presentation on child employment at the BPS Annual Conference, he was gracious enough to spare some time to explore the issues further; my questions are in bold. This forms part of this month's focus on younger people in the workforce.)

You've spoken of how jobs can have good and bad effects on young people who take them. What's a good example of that?

Well, take the number of hours worked: our research suggests a complex relationship with educational attainment. Students working excessive hours - more than 15 hours a week - suffer negative consequences for their academic attainment. But those working five to six hours a week do better educationally than students who have never worked. Of course, we have to establish the causality, but it's clear that working isn't necessarily a bad thing for schooling.

Beyond the hours worked, are there types of jobs that are less worthwhile – too menial, perhaps?

We need to be cautious and not look at these jobs through adult eyes. The least demanding jobs are those in delivery: not a lot of contact with individuals, not much decision-making. But at the same time, those jobs tend to be taken by people who've never worked before, as a first way into having a full-time job.

As an early experience, it might be demanding to them, as they've never had to get up early before, they've never had to be reliable. And typically, people who start part-time in delivery work go through a sort of career path of part-time jobs, with an 'arc of demand' increasing as they move forward.

Could you talk about how employers are involved?

Well, they tend to seek child employees on the basis of flexibility, rather than cheapness – wages are typically standard, especially for post-16s. Some recognise “a breath of fresh air” that a young person brings into a workplace. For example, they see them as less pedantic than the adult part-time employees they have.

Employers are very variable in how they treat schoolchildren. One response to this would be to recognise good employers in some way. For instance, training provided is very variable. Those employers who do train see the young people as an investment for the future: “I get a good quality employee for a relatively low cost.”

In this sense, it resembles the impetus for many graduate programs.

Yes - and moreover, when these employees move on they typically introduce their friends as a 'next generation' for the business: a free screening process for the employer.

There is a growing recognition among employers that this young group of people is a valuable support system for their business, but in some cases it would pay for employers to give them more attention. It would be worthwhile for the better, more organised employers to introduce contracts when workers hit 16 - to ensure they get time off for exam preparation, to restrict hours so that work doesn't clash with education; to say 'we acknowledge we get the flexibility, so we give something back'.

I was fascinated by your finding that around 20% of your young sample had some supervisory responsibilities.

One example we have is of an individual entering work in a shoe shop at the age of 14, who gained sufficient expertise in technology and methods that by 16 they were being used to deal with and onboard new employees.

Now, we know the value of peer to peer tutoring in education, so why not take that model and apply it to business situations? You could imagine having a young person showing others the ropes may be better than a more managerial approach, and avoids potential culture clash.

How about the young people themselves – how can they get more from these early work experiences?

There's a major challenge for young workers themselves, as they tend to undervalue the experience, and don't see the full scope of what they're doing. In education, we use personal development planning to foster self-reflection on academic work. Should we extend this to work experience too?

There is a tension, however. When you talk to young people, one of the major benefits they see in paid work is a growth in their independence and autonomy – a consistent finding in the evidence base. If you try to educationalise that experience, you may be undermining one of its most valuable benefits! If you have to justify to the teacher what you've learned from work, it becomes just another kind of coursework. So we advise treading cautiously, as an opt-in opportunity for those who wish to try it.

How would you like to see the world of psychology participating in this discussion?

From an Occupational Psychology perspective, to ask whether or not we can look at this age group of workers in terms of well-researched features such as job satisfaction, quality of employment experiences, engagement, even issues like stress. There is an array of tools out there, but they've been designed for adult populations. Given that an estimated 1.1-1.7 million under-sixteens contribute to the economy through part-time jobs, and given we're talking about our future workforce here, this group needs some time under the spotlight.