
Wednesday, 29 January 2014

Year in Review: Attraction and assessment

Last year as ever we covered research on how we get people into jobs, and how they perform in them. To kick us off, here are a few fascinating findings unearthed by our colleague Christian Jarrett from the Research Digest.

Firstly, it's hard to spot liars. In a study asking participants to watch videos of genuine and bogus accounts of previous jobs, their ability to tell one from the other was barely better than guessing. But the headline was that many participants were hardened interviewers, yet their performance was no better than that of people who had never conducted an interview before. Interview experience may still help with validation: in a dynamic interview, techniques are available to probe and explore, which may provide more critical perspectives. But in terms of 'reading the signs', veterans do no better.

How do we recruit high performers in a competitive field? Increasingly, it seems, organisations are going to greater lengths to stand out from the crowd - see this media account of the Cicada 3301 mystery if you want an extreme example to occupy your afternoon. And research suggests the basic concept is solid: holding everything else constant, a less typical method of reaching out to your applicant base, such as a postcard rather than an email, may produce better results - in one recent experiment, giving Google a response rate of 5% rather than 1%.

Before you get around to assessing your applicants, it’s important to ensure you get suitable people to apply in the first place. A big part of this is candidate quality, but recent research argues that quantity may be more important than we think, especially if we are worried about cheating. Mathematical models suggest that even if cheating is widespread, were you to test enough people – and so be free to be more selective, taking the top 20% rather than the top 50% – you could end up selecting higher-calibre candidates than if you stuck with a cheat-proof but low-volume process.
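To make that intuition concrete, here's a minimal simulation sketch - my own toy model rather than anything from the research, with the cheat rate, score inflation and pool sizes all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_true_ability(n_applicants, select_frac, cheat_rate, cheat_boost=1.0):
    """Mean true ability of those selected on (possibly cheat-inflated) scores."""
    true_ability = rng.normal(0, 1, n_applicants)
    observed = true_ability.copy()
    cheaters = rng.random(n_applicants) < cheat_rate
    observed[cheaters] += cheat_boost  # cheaters inflate their observed score
    n_selected = int(n_applicants * select_frac)
    top = np.argsort(observed)[-n_selected:]  # select on observed, judge on true
    return true_ability[top].mean()

# Cheat-proof but low-volume: 100 applicants, take the top 50%
print(mean_true_ability(100, 0.50, cheat_rate=0.0))
# High-volume with widespread cheating: 1,000 applicants, take the top 20%
print(mean_true_ability(1000, 0.20, cheat_rate=0.3))
```

Under these assumed numbers, the big, selective pool typically yields a higher mean true ability in the selected group, despite nearly a third of its applicants gaming the test.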

What are modern recruitment methods actually assessing? Industry best practice involves identifying criteria that matter to the job, and then trying to obtain a distinct measure of each. But a body of research suggests candidates who do well are often coasting on a meta-ability, the 'ability to identify criteria' (ATIC): how well you can figure out what is expected of you in a situation. Research this year suggests that we may need to accept that this ability is useful for performing the job, just as it is in getting the job, by allowing you to discern the course of action that is likely to satisfy others or fulfil the unspoken expectations of managers, customers or stakeholders. This asks hard questions about how we should design selection processes: high-ATIC candidates can’t show their stuff when assessment criteria are transparent and obvious to all, so might ambiguous jobs be better assessed using ambiguous processes? A provocative idea to chew on.

We assume extraverts sell more and that cognitive ability is always an asset in jobs. Yet both these taken-for-granted facts were held up for scrutiny this year. Evidence suggested that 'ambiverts', who sit between the extravert and introvert extremes, tend to do better in sales roles: in the study in question, earning $151 revenue per hour vs. $115 for the highly extraverted. Meanwhile, a body of research argues that high cognitive ability can actually be a liability for certain types of work, such as performing under pressure, but a critical review disputes this, claiming that all else being equal, "the smarter you are, the better you will perform on just about any complex task."

Not every candidate can be successful, so it's useful to know who feels hard done by; after all, these people are your customers, partners, or prospective applicants of the future. Research suggests that candidates are likely to believe they were given a fair shake if their personality resembles one of two constellations: Resilient types or 'going with the flow' Bohemians. Those of an Overcontrolling disposition are more liable to feel victimised by unwelcome results.

Sometimes candidates are genuinely victimised. Evidence suggests that candidates with a non-native accent are less likely to be hired, on the pretext that the candidate doesn't appear politically savvy - a nebulous judgment hard to prove or disprove. Employers should ensure that checks and balances are in place to avoid such systematic prejudice squeezing talented individuals from the system.

So what to do, hirers of the world? Be realistic: it may be harder to eliminate cheating than to soften its effects. And your processes may not be purely measuring what you want, but still capturing candidates with the capability to do their job. And, rather than relying on interviewer superpowers, use checks and balances and appropriate weighting to make sure a bogus interview doesn't blow you away. Don't abide by stereotypes: look harder at that quietly confident salesperson, or that impassioned presentation from that entrepreneur with an accent. Cognitive ability remains important for job performance. Ultimately, to catch the best and brightest, it could be down to you to be creative in your recruitment methods.

Friday, 20 December 2013

If you love to multitask, you better have the aptitude to back it up

Over a typical working day, I'll juggle all manner of tasks, some important, some urgent, all competing for attention. Multitasking, in this sense, is common to many a modern workplace, and it's been known for some time that people differ in their enjoyment of it.

Over the last decade, studies have confirmed that people vary also in their ability to multitask. A new study by Kristin Sanderson and colleagues suggests that to understand someone's fit to a multitasking role, it's critical to look at how ability and preference interact.

The study's 119 participants came from a range of professional occupations, all of which involved multitasking to an Important or Very Important degree, as rated by independent experts. Participants then completed a computer multitasking assessment, which involved solving two types of task in parallel on a split-screen display. Each participant rated their preference for multitasking, termed polychronicity: how much one enjoys working on multiple tasks in parallel rather than tackling them more sequentially. Performance data was also available, based on ratings from their supervisors.

For especially polychronic participants – those scoring one standard deviation above the mean – there was a relationship between their multitasking ability and supervisor ratings: more ability went with better ratings. But those with polychronicity ratings 1 SD below the mean received similar ratings regardless of their ability. The data suggested that their multitasking ability just didn't have consequences, which makes sense: if you choose not to do something, it doesn't matter how good you would have been at doing it.



[Figure: interaction plot from the study - supervisor performance ratings against multitasking ability, with a dotted line for participants low in polychronicity]

As you can see above, when multitasking ability is poor, polychronic participants' performance scores fall below those of their monochronic counterparts (the dotted line), perhaps reflecting such individuals biting off more task-juggling than they can chew - although I should emphasise that the study doesn't explicitly test for differences here.
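If you'd like to see the shape of such a moderation analysis, here is a minimal sketch using simulated data - the effect sizes are invented, and none of this comes from the study itself:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 119  # same size as the study's sample; the data here are simulated

ability = rng.normal(0, 1, n)         # multitasking ability (standardised)
polychronicity = rng.normal(0, 1, n)  # preference for multitasking (standardised)
# Build in the reported pattern: ability pays off only when polychronicity is high
performance = (0.15 * ability + 0.15 * ability * polychronicity
               + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([ability, polychronicity,
                                     ability * polychronicity]))
fit = sm.OLS(performance, X).fit()

# Simple slopes: the effect of ability at +/- 1 SD of polychronicity
b_ability, b_inter = fit.params[1], fit.params[3]
print("slope at +1 SD polychronicity:", b_ability + b_inter)  # clearly positive
print("slope at -1 SD polychronicity:", b_ability - b_inter)  # near zero
```

The interaction term carries the effect: the simple slope is positive for highly polychronic people and flat for their monochronic counterparts, mirroring the pattern described above.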

 So, even in a job that calls for multitasking, being highly polychronic is not a straightforward benefit. If you are recruiting for such a role, bear in mind that both will-do and can-do matter.


Sanderson, K. R., Bruk-Lee, V., Viswesvaran, C., Gutierrez, S., & Kantrowitz, T. (2013). Multitasking: Do preference and ability interact to predict performance at work? Journal of Occupational and Organizational Psychology, 83, 556-563. DOI: 10.1111/joop.12025

Further reading:
König, C. J., & Waller, M. J. (2010). Time for reflection: A critical examination of polychronicity. Human Performance, 23, 173–190. doi:10.1080/08959281003621703
 

Monday, 8 April 2013

'Figuring out what they're after': a common thread between assessment performance and job performance?

A while back we shared a review of the Ability To Identify Criteria (ATIC), suggesting that differences in how people perform on a selection process like an interview are due partly to how good they are at figuring out what the process wants to hear. The article suggested that this may not be entirely bad, as ATIC appears to have a role in job performance as well. Now the authors have published empirical work looking closer at this issue. Their data suggest that figuring out situational demands may have a very substantial hand in both selection and job performance, and may even be the major link between the two.

First author Anne Jansen and colleagues (principally from the University of Zürich) recruited 124 participants into a simulated assessment process, pitched as a way to give them experience of job selection. Participants were incentivised to do well - the top two candidates each day were financially rewarded, and each paid a small fee to enter the process - encouraging motivated participation more in line with real selection experiences. Participants were informed of the job description ahead of time, and on assessment day turned up in groups of 12 to undertake interviews, a cognitive test, presentations and group discussions, observed by multiple assessors (Occupational Psychology MSc students).

After each exercise, participants were asked to document their hunch of what dimensions it was trying to measure; this was compared to answers given by the assessors beforehand, with close matches leading to higher ATIC scores. No such information was explicitly provided (otherwise ATIC becomes redundant), so participants had to rely on indirect cues: the job description, reading between the lines of instructions, and sensitivity to what assessors seemed attuned to. In addition, each participant gave authorisation for their real-work supervisors to be contacted online for feedback on their actual job performance; in total, 107 supervisors responded.

Overall assessment centre scores correlated with job performance, with a correlation of .21. Both AC scores and job performance also correlated with participants' ATIC scores: someone who was savvy in figuring out what the AC asked of them did better in the AC, and also did better in the workplace. Jansen's team constructed a statistical model in which cognitive ability fed ATIC, which itself strongly contributed to performance on assessments and in the workplace. Once all of these factors were accounted for, assessment performance itself was no predictor of workplace performance. This suggests, at the least, that ATIC and the factors that sit behind it are a substantial part of why assessments adequately predict workplace performance.
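To illustrate the logic of that model, here's a minimal sketch with simulated data in which ATIC is the only route connecting assessment scores to job performance - the path strengths are invented for illustration, not taken from the paper:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 107  # matching the number of supervisor responses; the data are simulated

cognitive = rng.normal(0, 1, n)
atic = 0.4 * cognitive + rng.normal(0, 1, n)  # cognitive ability feeds ATIC
ac_score = 0.5 * atic + rng.normal(0, 1, n)   # ATIC drives assessment scores...
job_perf = 0.5 * atic + rng.normal(0, 1, n)   # ...and job performance

# Zero-order: AC scores appear to predict job performance
print(sm.OLS(job_perf, sm.add_constant(ac_score)).fit().params)
# But the AC coefficient collapses towards zero once ATIC is controlled
X = sm.add_constant(np.column_stack([ac_score, atic]))
print(sm.OLS(job_perf, X).fit().params)
```

Because the simulated AC score carries no information about job performance beyond its ATIC component, controlling for ATIC wipes out its predictive power - the pattern Jansen's team report.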

One way to look at this is as the identification of just another factor - after IQ, EI, resilience and practical intelligence - that researchers argue counts in the workplace. But actually, this line of research advocates a shift in perspective. It asks us to accept that performance doesn't just depend on the resources you bring to the job, but also on your perception of what the job is. This interactionist perspective is less concerned with raw capability and more with orientation. And it raises new considerations: in jobs where orientation is clear-cut - four duties, get on with it - shouldn't we be minimising ATIC's role in selection? Whereas at the other extreme, could applicants for jobs with high ambiguity be tasked with finding their own way through the application process?

Jansen, A., Melchers, K. G., Lievens, F., Kleinmann, M., Brändli, M., Fraefel, L., & König, C. J. (2013). Situation assessment as an ignored factor in the behavioral consistency paradigm underlying the validity of personnel selection procedures. Journal of Applied Psychology, 98 (2), 326-341. PMID: 23244223

Further reading: The original review is

Kleinmann, M., Ingold, P., Lievens, F., Jansen, A., Melchers, K., & König, C. (2011). A different look at why selection procedures work: The role of candidates' ability to identify criteria. Organizational Psychology Review, 1 (2), 128-146. DOI: 10.1177/2041386610387000

Friday, 15 February 2013

Can more cognitive ability be a liability?

If you want to predict performance at work, you're hopefully aware of the long-investigated benefits that cognitive ability provides in many types of occupation. So hearing about apparently contradictory psychology studies in domains such as 'success under stress' where 'less is more', you're curious. Can cognitive ability - intelligence, roughly speaking - undermine task performance, or adaptability when learning? A critical review by Oswald and Beier argues that we needn't rewrite our assumptions just yet.

There are differing explanations for why ability could be a liability, all homing in on higher attentional control (through, for instance, high working memory capacity) biasing one towards highly conscious 'controlled processing': looking for perfect solutions when more automatic processing or loose heuristics do the job better, or at least protect you from falling into debilitating anxiety. If so, we could - we should - favour low-cognitive-ability applicants for roles involving the kinds of tasks that show such effects. Beier and Oswald draw out three research areas that claim such effects, taking the position that the exceptional findings can be resolved by appraising the complexity of the tasks themselves.

First, 'pressure to perform' research asserts that when attempting difficult tasks, high-cognitive-ability individuals are more sensitive to pressures such as financial incentives or being observed, and their performance suffers accordingly. This is often construed as lower-ability individuals being more 'adaptable' to pressure. Beier and Oswald first point out that in these studies the performance drop still leaves high-ability individuals doing better than their counterparts, just by a smaller margin than before. They go on to offer another account: under normal conditions, high-ability people do indeed tackle difficult tasks by bringing their attentional resources to bear, which pays off in superior performance. When pressure arises, which can indeed mess with highly conscious strategies, they rely more on the blunt, guessy, heuristic approach which low-ability people were using all along. Under this account, then, higher-cognitive-ability individuals are *more* adaptable under pressure, changing up their strategies and still coming out on top.

A study by DeCaro et al used a procedural learning paradigm where you must figure out the hidden rules to categorise images flashed on screen, varying in colour, shape, number and background. They showed that for simple rules, high-cognitive-ability individuals learned faster, but the situation reversed when the rules were highly complex. There, low-ability individuals are hypothesised to thrive by going with their gut or employing kludgy strategies such as memorising individual successes rather than attempting to generalise to rules. But Beier and Oswald raise two problems concerning the goal of the task. The dependent measure of task mastery was based on how long it took a participant to make a run of eight correct responses. But why eight, and not five, or 15? A follow-up study showed that using 16-trial runs as the criterion, the ability liability disappeared. Perhaps more importantly, the goal was not explicit for participants. If it were, high-ability individuals might have exercised judgment as to whether to bother investing in solving the algorithm, or go for more imperfect tactics, as the reviewers believe occurs in pressure-to-perform situations.

Finally, researchers investigating 'adaptive performance' have suggested that when learning a complex task, such as a tank-battle simulation, a sudden change in the task demands (a massive terrain shift) is harder to stomach for those of higher cognitive ability, who experience a larger drop in performance at the point where we leave the familiar old world and bravely enter a new one. Again, this is marshalled as evidence of a lack of flexibility due to commitment to one strategy.

In this particular study, the authors' analysis suggests that higher-ability people are not learning at a faster rate than their counterparts (presumably they begin with a higher capability, given that, just as in the pressure-to-perform literature, they do perform better overall). But through some simple modelling, Beier and Oswald demonstrate that this is hard to believe, as it implies that in a complex situation with constantly changing demands - hundreds of brave new worlds - higher-ability people would get worse and worse relative to those with less ability, which is a radical claim. Instead, they suggest that the parallel learning rates are an artefact of the analysis approach used, and that in truth the finding is more intuitive: higher ability means you learn a situation more quickly, and thus have more to lose at the moment the conditions are altered.
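Here's a toy version of that modelling argument - my own sketch with invented parameters, not Beier and Oswald's actual model. If higher ability is modelled as a faster learning rate, the faster learner climbs higher between task changes, loses more in absolute terms at each change, and yet never falls behind:

```python
import numpy as np

def simulate(rate, n_blocks=5, trials_per_block=20, drop=0.5):
    """Skill grows each trial with diminishing returns; each task change
    wipes out a fraction `drop` of whatever has been learned so far."""
    skill, history = 0.0, []
    for _ in range(n_blocks):
        skill *= (1 - drop)  # the terrain shifts: old learning partly stops applying
        for _ in range(trials_per_block):
            skill += rate * (1 - skill)
            history.append(skill)
    return np.array(history)

high = simulate(rate=0.10)  # higher ability as a faster learning rate (assumption)
low = simulate(rate=0.05)

# The faster learner suffers bigger absolute drops at each change
# (more to lose), yet stays ahead throughout:
print((high >= low).all())  # True under these assumed parameters
```

By contrast, genuinely parallel learning rates would mean the faster group's only advantage is its starting level, which losses at each change would steadily erode - the implausible implication the reviewers highlight.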

This line of research will continue, as we seek to better understand how performance is influenced by a range of interlocking factors. For now, Beier and Oswald conclude that their review "is strongly aligned with one of the most consistent findings in over a century of psychological research: Cognitive ability exerts a main effect such that the smarter you are, the better you will perform on just about any complex task, all else being equal."

Beier, M., & Oswald, F. (2012). Is cognitive ability a liability? A critique and future research agenda on skilled performance. Journal of Experimental Psychology: Applied, 18 (4), 331-345. DOI: 10.1037/a0030869

Further reading:
 
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274. DOI: 10.1037/0033-2909.124.2.262

Wednesday, 18 January 2012

2012 resolutions: people differ (so now what?)


This year, time to pay more attention to the fact that our people are different. Sometimes the best resolution is to not act on this - for instance, the fact that people's earnings relate to their weight almost certainly reflects a degree of bias that we would be better to shake off. And some research is currently too provisional to figure out what to do with: if a leader's facial width really is related to company success, the implications for what to do about that are far from clear.

Yet some people may suit some work better, or approach things in very different ways. Here are some thoughts on how to approach this with the evidence in mind.

1. Challenge black-and-white notions of "what good looks like" at work. For instance, two voguish notions are that emotional intelligence is a desirable trait, and impulsivity a problematic one. Yet recent research shows that for some behaviours the exact opposite is true. If there is one resolution I would put above all others, it's to recognise that a person's profile is multivalent, containing good and bad. Overall, the uniqueness of each employee is an asset if deployed correctly; a cookie-cutter workplace would be a disaster.

2. Avoid driving your introverts to distraction. Work environments differ in their bustle and activity, and research suggests the introverts bear the brunt of a noisy environment.

3. Leverage the broader assets your people bring to organisations. Extraverts are likely to have large social networks that may help them spread messages or identify resources to solve organisational problems. But don't neglect introverts on this matter either: while they are likely to have fewer contacts overall, their relationships are just as deep.

4. Consider people's different expectations for what they get out of work. Some jobs are intrinsically pretty grim - so-called dirty work, like euthanising animals.  If it has to be done, it's worth knowing that people with lower expectations are most likely to take the down and dirty in their stride.  Indeed, high expectations and optimism carry their share of risk in other professions too, with so-called positive pollyannas more likely to leave managerial career tracks if their aspirations aren't quickly met.

5. Don't get sucked in by the claims of the arrogant; it's often they who need attention. It turns out that noisy bluster about the shortcomings of others and personal superiority masks substandard performance. Perhaps this doesn't surprise you, but noting that even their own self-ratings tend to admit to lower performance, we can take this as a starting point to intervene.

And finally...

Let's not let the above suck us into too essentialist a view of who we all are. An awful lot of our performance at work depends on learned capabilities rather than innate talent. Now, it may be unsurprising to hear that the single most direct predictor of performance for computer programmers is their level of programming knowledge. But how about the discovery that charisma can be trained through the identification of discrete behaviours? So my final resolution for you is to be imaginative about how to develop employees' capabilities.

Monday, 28 November 2011

What makes a great programmer?

Experience and brute brainpower enhance programming skill by helping programming knowledge to build over time, rather than by directly boosting current performance, according to a new article in the Journal of Individual Differences.

Authors Gunnar Rye Bergersen and Jan-Eric Gustafsson put 65 professional programmers through their paces for two straight days, tackling twelve meaty tasks in the Java language to demonstrate their programming skill - the quantity the study ultimately wanted to understand better.

Participants all filled in an extensive questionnaire on Java programming knowledge. Some participants also completed a suite of tasks involving memorising items (e.g. letters) while simultaneously handling another task such as checking sentences for errors. These tasks measure working memory, the component of mind that keeps things available for conscious processing, which is related to 'g', our proposed fundamental level of mental ability. Unfortunately, working memory scores weren't taken for over half the participants due to logistical issues.

The authors modelled the relationships between all variables, including years of work experience, and found the best predictor of programming skill was programming knowledge: it loaded onto skill with a value of .77, where one would mean perfect prediction. Once knowledge was taken into account, a programmer's skill didn't benefit from better working memory or longer experience. Rather, these variables seem to matter earlier in the process by building better knowledge: working memory to help the programmer make sense of complex concepts, experience to provide the time for this to happen.

You can't get by in the programming industry with a static knowledge base, so working memory and a sharp mind will always be in demand in the profession. Indeed, observing that their data found an association between working memory and programming experience, the authors speculate that wannabes with poor working memory are more likely to leave the profession entirely. But this study asks us to recognise that a whizz programmer's competence is thanks to applying that brainpower to learning their trade.

Bergersen, G., & Gustafsson, J. (2011). Programming Skill, Knowledge, and Working Memory Among Professional Software Developers from an Investment Theory Perspective. Journal of Individual Differences, 32 (4), 201-209. DOI: 10.1027/1614-0001/a000052

Monday, 3 October 2011

Noise and music are more distracting to introverts at work

Many workplaces allow the playing of radio or recorded music during working hours, providing a chance to personalise and brighten the working climate. But how does music affect our ability to perform tasks at work? And does this depend on the kind of person we are? A recent study by a team from University College London sheds more light on this topic.

Stacey Dobbs, Adrian Furnham and Alastair McClelland worked with 118 female schoolchildren (aged 11-18) to investigate how tasks that demand focus are influenced by different kinds of auditory distraction administered over headphones. They developed two soundtracks, one composed of samples of environmental sound like children playing and laughter, and the other a mix of UK garage music. (I'll spare you the embarrassment of reading me trying to describe that.) They also wanted to know whether extraversion had any influence, following previous findings that suggest more introverted people suffer more from auditory distraction, as they are more easily overwhelmed by strong stimuli.

The participants attempted different tasks under the various conditions, and slightly different effects emerged. On a test of abstract reasoning, the participants did best in silence, and scores suffered less under music than under noise, where performance was lowest. But the penalties from auditory distraction diminished as extraversion increased, and the most extraverted students performed just as strongly in all conditions. On a test of general cognitive ability, and another of verbal reasoning, the silence and music conditions were comparable, with noise again leading to the worst performance. Again, higher extraversion eliminated the penalty from noise.

We should always be careful generalising from one narrow sample (schoolchildren) to another population, although the extraversion effect has been observed before in adult groups (and it's also true that children do form part of our workforce). That said, it's interesting that noise was more disruptive than music across all tasks. The authors suggest this may be partly due to noise lacking the positive emotional influence that music can provide; noise isn't designed to delight. They also draw attention to earlier work by the first author, which suggests that the most distracting music is that most familiar to the listener. This suggests that an eclectic radio station, or a large and varied play-list, may be a viable alternative to wrestling with background chatter, or slapping that well-worn U2 record on. Again.

Dobbs, S., Furnham, A., & McClelland, A. (2011). The effect of background music and noise on the cognitive test performance of introverts and extraverts. Applied Cognitive Psychology, 25 (2), 307-313. DOI: 10.1002/acp.1692

Monday, 29 August 2011

Are job selection methods actually measuring 'ability to identify criteria'?



While we know that modern selection procedures such as ability tests and structured interviews are successful in predicting job performance, it's much less clear how they pull off those predictions. The occupational psychology process – and thus our belief system of how things work – is essentially: a) identify what the job needs, b) distil this into measurable dimensions, c) assess performance on those dimensions. But a recent review article by Martin Kleinmann and colleagues suggests that in some cases, we may largely be assessing something else: the “ability to identify criteria”.

The review unpacks a field of research that recognises that people aren't passive when being assessed. Candidates try to squirrel out what they are being asked to do, or even who they are being asked to be, and funnel their energies towards that. When the situation is ambiguous, a so-called “weak” situation, those better at squirrelling – those with high “ability to identify criteria” (ATIC) - will put on the right performance, and those that are worse will put on Peer Gynt for the panto crowd.

Some people are better at guessing what an assessment is measuring than others, so in itself ATIC is a real phenomenon. And the research shows that higher ATIC scores are associated with higher overall assessment performance, and better scores specifically on the dimensions they correctly guess. ATIC clearly has a 'figuring-out' element, so we might suspect its effects are an artefact of it being strongly associated with cognitive ability, itself associated with better performance in many types of assessment. But if anything the evidence works the other way. ATIC has an effect over and above cognitive ability, and it seems possible that cognitive ability buffs assessment scores mainly due to its contribution to the ATIC effect.

In a recent study, ATIC, assessment performance, and candidate job performance were examined within a single selection scenario. Remarkably it found that job performance correlated better with ATIC than it did with the assessment scores themselves. In fact, the relationship between assessment scores and job performance became insignificant after controlling for ATIC. This offers the provocative possibility that the main reason assessments are useful is as a window into ATIC, which the authors consider “the cognitive component of social competence in selection situations”. After all, many modern jobs, particularly managerial ones, depend upon figuring out what a social situation demands of you.

So what to make of this, especially if you are an assessment practitioner? We must be realistic about what we are really assessing, which in no small part is 'figuring out the rules of the game'. If you're unhappy about that, there's a simple way to wipe out the ATIC effect: make the assessed dimensions transparent, turning the weak situation into a strong, unambiguous one. Losing the contamination of ATIC leads to more accurate measures of the individual dimensions you decided were important. But your overall prediction of job performance will be weaker, because you've lost the ATIC factor, which does genuinely seem to matter. And while no-one is suggesting that it is all that matters in the job, it may be the aspect of work that assessments are best positioned to pick up.

Kleinmann, M., Ingold, P., Lievens, F., Jansen, A., Melchers, K., & König, C. (2011). A different look at why selection procedures work: The role of candidates' ability to identify criteria. Organizational Psychology Review, 1 (2), 128-146. DOI: 10.1177/2041386610387000

Tuesday, 19 July 2011

Interview decisions are influenced by initial rapport

Research last year demonstrated that interviewees are judged according to their early rapport with the interviewer, even when a highly structured interview format is followed. The same team have now put this finding to the replication test and dug deeper into its causes.

Murray Barrick and colleagues gathered 135 student volunteers keen to improve their interview skills, and put each through two interviews with different interviewers from a pool of business professionals. Each interview proper was firmly structured, with predefined questions on competency areas, but commenced with a few minutes of unstructured rapport building. Each interviewee was rated on initial impressions just after the rapport stage, and their interview responses were evaluated at the end of the interview. Just as in the 2010 study, the early impressions and final interview ratings strongly correlated.

The judgements we form from first impressions are rarely arbitrary but capture information about the other person, so it's possible the influence of pre-interview rapport isn't sheer bias. Through personality testing, Barrick's team found that first impressions were strongly related to interviewee extraversion, emotional stability, agreeableness, and conscientiousness. Conscientiousness is generally associated with better job performance, and tied into several of the study competencies such as 'work ethic' and 'drive for results'. The other traits, while not necessarily desirable in all roles, can appear attractive qualities in a prospective organisational member.

Initial impressions also correlated with volunteers' self-perception of how qualified they were for the job, and also with an independent measure of verbal skill. The latter was assessed through a separate task where the volunteers interacted face-to-face with a series of peers who rated features such as articulacy of speech. These findings suggest that the rapport-building stage was giving early insight into some sense of perceived fit to the specific role, as well as genuine candidate ability, in addition to personality factors. By careful analysis, the researchers found that all of these factors influenced the final interview ratings, and that this was due to the way they shaped first impressions: after those first few minutes, there was little extra influence of these qualities across the rest of the interview.

As social animals we're reluctant to do away with rapport altogether, and impressions can form even in snatches of seconds. The researchers suggest – with the caveat of more research - that interviewers may as well embrace the first impression, explicitly evaluating some relevant criteria, such as those identified in this study, once the rapport stage is over. And candidates shouldn't unduly panic: this study reveals that the first impression is partly down to an accurate appraisal of some of your true qualities, things you can't do very much about.

Barrick, M., Dustin, S., Giluk, T., Stewart, G., Shaffer, J., & Swider, B. (2011). Candidate characteristics driving initial impressions during rapport building: Implications for employment interview validity. Journal of Occupational and Organizational Psychology. DOI: 10.1111/j.2044-8325.2011.02036.x

Wednesday, 29 June 2011

Best practices may not be best for your organisation

If your organisation puts time and effort into implementing best practice HR methods, such as ability testing, it must be reassuring to know it all pays off in the end. Or does it? A recent study involving US financial organisations casts doubt on this belief.

Oksana Drogan and George Yancey were interested in six recruitment technologies generally considered 'best practice': job analysis to see what a candidate needs to perform well; monitoring the effectiveness of recruitment sources; using ability tests; structuring interviews; using validation studies to establish whether recruitment performance translates to job performance; and using BIBs/WABs (biographical information blanks and weighted application blanks), different forms of scoreable application form (SAFs in the UK).

There is already much research on these areas at an individual level. For example, it's well-evidenced that when ability tests are well-designed and appropriate to the job they can predict aspects of individual job performance. But Drogan and Yancey were curious about organisational outcomes: in their case, financial success. Evidence is thinner and equivocal in this domain, so they decided to conduct a fresh investigation to see how these individual promises fare at the organisational level – do they cash out, or do the cheques bounce?

The researchers contacted HR executives from various credit unions across the US and surveyed the 122 respondents on whether they used each of the six practices, giving each organisation a 0-6 overall score. They also gathered publicly available financial data on each credit union, rendered into different measures such as market share growth; a quick review confirmed a fair variety in financial performance across the organisations.

However, that variety was not down to the practices used. Firstly, the overall score did not correlate with any of the financial measures. Secondly, for any given practice, the financial success of organisations that employed it was no better than that of those that did not. Neither was there any sense of a bedding-in period, with practices becoming more effective over years of use: such an effect was found for only one practice (validation) with just a single financial measure.

The authors conclude that “increasing the technical sophistication of selection procedures alone is not sufficient to influence bottom line results.” They point to other priorities that HR can pursue: aligning procedures to the unique features of the organisation, or taking an integrated approach that recognises that investment in recruitment may be ineffective if it doesn't tie in with how you train new employees. In other words, use a procedure because it's useful here, now, for you, not because it's trumpeted as Best Practice.

Drogan, O., & Yancey, G. (2011). Financial Utility of Best Employee Selection Practices at Organizational Level of Performance. The Psychologist-Manager Journal, 14 (1), 52-69. DOI: 10.1080/10887156.2011.546194

Thursday, 17 March 2011

Emotional Intelligence: What can it really tell us about leadership?

On the heels of last month's post on a possible further component of emotional intelligence (EI), the Academy of Management Perspectives has just published a review of how EI relates to leadership. Is EI the primary driver of effective leadership? Or is evidence of its relevance to leadership “non-existent”?

A team of authors led by Frank Walter of the University of Groningen step in to arbitrate, reviewing past research as three distinct streams, an idea introduced by Neal Ashkanasy and Catherine Daus in 2005. The first stream contains research using standardised tests to measure employees' emotional abilities, such as emotion perception. Research within the second uses a rating method to make its measurements, trusting that we can accurately judge these abilities in ourselves or others. The third uses a broader definition, popular due to its power to predict work outcomes, but criticised as “including almost everything except cognitive ability”, which is less useful when we're trying to differentiate components of leadership.

The authors argue that by differentiating the streams we can better detect when the case for a particular phenomenon is supported by converging evidence – agreement across different streams. And such converging evidence exists for leadership effectiveness, examined through outcomes including higher effort, satisfaction, performance and profit creation within the team managed; all three streams agree on a role for EI. Similarly, there is a general consensus that EI relates to leadership emergence, the degree to which someone can manifest as a leader in situations where they lack formal authority.

The three-streams view also helps expose where evidence is gappy, as it is for specific leadership behaviours and styles. Can EI predict transformational leadership, a charismatic, visionary style that stimulates its followers? Definitely, if we consider streams two and three. But the stream-one, hard-ability EI evidence is thinner on the ground. For other leadership styles, such as the laissez-faire leader, the evidence is also unclear. For Walter and his colleagues, the jury is definitely out, as they believe that data from stream one is the best foundation for understanding what incremental value EI gives over and above other factors like personality.

The authors conclude that there is encouraging evidence that EI is a useful construct for understanding leadership, but warn that “the pattern of findings reported in the published literature suggests that EI does not unequivocally benefit leadership across all work situations.” They call for more stream one evidence, and insist there is a need to consistently control for both personality and cognitive ability, a step taken in only a single study reviewed.

Finally, the Digest HQ welcome their entreaty that “incorporating EI in leadership education, training, and development should proceed on strictly evidence-based grounds, and it should not come at the expense of other equally or even more important leadership antecedents.”

Happily, the review is freely available to access from the site of Michael Cole, one of its authors.


Walter, F. H., Cole, M. S., & Humphrey, R. H. (2011). Emotional Intelligence: Sine Qua Non of Leadership or Folderol? Academy of Management Perspectives, 25 (1), 45-59

Monday, 28 February 2011

Division of Occupational Psychology conference report


I'll close up our first full month here at the Occupational Digest with the first of a few reports on the Division's annual conference, which ran 12-14 January this year.
I was engaged and provoked by Timothy Judge's Myers Lecture, challenging “The illusions under which we labour”. His sights were on the “situational premise”: the idea that environment and context matter in explaining human behaviour, thus allowing occupational psychology to fixate on culture, recommend interventions, and believe in change. Judge examined this assumption via a wide-ranging tour of findings from behavioural genetics, such as the heritability of altruism, together with evidence of how humans quickly adapt to a new status quo; key examples here included how both marriage and lottery wins have only a transient impact on your levels of life satisfaction.
Judge ended by suggesting that because people are difficult to change, we should place more focus on recruiting the right type of people, redesigning jobs to fit people and leveraging strengths rather than trying to fix weakness; all laudable activities, I feel, and each of them currently practised in the profession (the first of those frankly dominates the industry!). The conclusion itself was less convincing, and I think he would have to be armed with a more systematic argument, based on evidence that tied directly to the methods and objectives in question, in order for organisational psychologists (and educators, therapists, army trainers...) to abandon their belief that individuals can change to become more effective at accomplishing goals.
A later talk by Steve Woods looked at ethnic differences in ability test scores. Occupational test users are sensitive to 'adverse impact' - disproportionately favouring people from one group over another – so this topic has been well researched, including using meta-analysis, which looks for patterns over a set of studies. Woods cites Roth et al’s (2001) meta-analysis, which suggests a difference in means between black and white test-takers of up to one standard deviation (d = 1): loosely, this means a squarely average white candidate would score similarly to a black candidate who was sharper than nearly 85% of the black population. Evidence suggests the difference genuinely reflects group differences in ability, rather than issues with testing, with researchers disputing whether the effect reflects innate differences or cultural ones such as access to education.
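The percentile arithmetic here follows from the normal curve: a one standard deviation gap between group means places one group's average at roughly the 84th percentile of the other group's distribution. A quick check, assuming normally distributed scores:

```python
from scipy.stats import norm

d = 1.0  # standardised difference between the two group means
# Proportion of one distribution falling below a point 1 SD above its mean:
print(norm.cdf(d))  # ~0.84
```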
Meta-analyses tend to collapse all the available data to ensure their overview is as authoritative as possible. Woods points out that by separating out the data instead, we can see whether the difference alters over time. Change over time would give credence to the cultural account, as genetic changes at that scale would be negligible, but cultural changes, especially for disadvantaged communities often targeted by public policy, can be more substantial. Woods and colleagues were interested in scores that reflected ‘g’, the general factor of intelligence, and considered only scores from tests that measured two or more of its subcomponents (e.g. numerical and verbal ability). The data came from ninety-one samples of healthy Americans over the age of sixteen, totalling 1.1 million test scores, grouped into four decades from the 60s to the 90s.
One unexpected finding was a spike in D, the black-white difference, when you move from the 60s to the 70s. This isn't predicted by either genetic or cultural accounts, but makes sense if you think of the period before civil rights as one of limited opportunity for black people: test taking would only be available to fairly exceptional individuals, ‘restricting the range’ to those likely to score better. Putting this decade aside, the overall trend was for a shrinking of D, closing down to around .3. Woods argues that this data changes the question from 'if' to 'how much': how much of the variance is due to cultural and developmental factors.
The talk was interesting especially in the light of Tim Judge’s keynote; here we saw evidence on fixed vs mutable differences in an organisational context, and, here at least, the score was culture one, genes nil.

Details about the 2011 DOP conference: http://www.bps.org.uk/dop2011/

Tuesday, 15 February 2011

Influencing others by showing emotion: a new emotional ability?


Many workplaces recognise that besides more cognitive notions of intelligence – our capability to solve problems, use logic, process and judge factual information – they also need Emotional Intelligence (EI): the capability to recognise, make the most of and manage emotion. Now a new theoretical paper makes the case that we should be expanding this concept of EI to include the ability to influence others through emotional displays.

EI currently focuses on spotting, dealing with and making sense of emotions. Can I figure out why I was feeling increasingly uneasy through the meeting? Spot how you are feeling right now? Guess what might cheer you up? Authors Côté and Hideg focus their attention on another feature of emotions: that we display them physically to others. This insight goes back to Darwin, and has since been extensively researched, notably by Paul Ekman (whose work is popularised in the TV series Lie to Me), with the field now recognising that the face, voice and touch are all used for this purpose. Emotional displays, even subtle ones, can cause our heart rate to rise, our skin to sweat, and our emotions to swell, often to then be displayed onwards in ripples of emotional contagion, such as when laughter catches everyone within earshot.

Côté and Hideg draw attention to the workplace consequences of these displays. Anger at those who have neglected their duties can provoke them to redouble their efforts, guilt displays increase the likelihood of forgiveness, and positive emotions can result in more pro-social behaviour. Clearly there is an advantage to being adept at these displays, and the authors point out at least two ways in which one can be better. One is displaying the right emotion for the situation; considerations include the communication medium, as some emotions, such as anger, are conveyed more strongly via the voice than the face (and for others the reverse is true). Another is displaying that emotion effectively, facilitated by approaches such as 'deep acting', which tries to change the emotion itself, in contrast to surface acting, which acts only on behaviour and can be perceived as inauthentic.

Côté and Hideg amass research showing genuine variety in how well people can influence others through displays, for instance the ability of bill collectors to communicate urgency to debtors. They argue that all this evidence suggests a real human capability that shows individual differences, concerns emotions, and can result in better or worse outcomes. On this basis, they call for it to be considered as a new emotional ability within the Emotional Intelligence framework.

In an illuminating section, the paper explores how influencing others through emotional displays also relies on another party: the intended recipient. They may fail to recognise the display if they come from a different culture with different cues. They may be unmotivated to give their attention to your display: because they don't trust you, because they hold the power in the interaction and are blasé about how you may feel, or because they don’t see the value in trying to understand the situation (what the authors refer to as epistemic motivation). There is evidence for each of these factors moderating the effect of emotion displays.

We all know that people are influenced by the emotional reactions of those around them. But it’s valuable to recognise the ways this does and doesn’t work, know its genuine workplace consequences, and be aware that this may be better treated as an ability, rather than an unaccountable influence in the workplace. This paper does a fine job of this, drawing together a wealth of evidence, and because this research is clear, readable, and released in the freely-accessible Organizational Psychology Review, I'd encourage having a look yourself.

Côté, S., & Hideg, I. (2011). The ability to influence others via emotion displays: A new dimension of emotional intelligence. Organizational Psychology Review, 1 (1), 53-71. DOI: 10.1177/2041386610379257