Friday, 15 February 2013

Can more cognitive ability be a liability?

If you want to predict performance at work, you're hopefully aware of the long-investigated benefits that cognitive ability provides in many types of occupation. So when you hear about apparently contradictory psychology studies in domains such as 'success under stress', where 'less is more', you're curious. Can cognitive ability - intelligence, roughly speaking - undermine task performance, or learning adaptability? A critical review by Oswald and Beier argues that we needn't rewrite our assumptions just yet.

There are differing explanations for why ability could be a liability, all homing in on the idea that higher attentional control (through, for instance, high working memory capacity) biases one towards highly conscious 'controlled processing' - searching for perfect solutions when more automatic processing or loose heuristics do the job better, or at least protect you from falling into debilitating anxiety. If so, we could - we should - favour low-cognitive-ability applicants for roles involving the kinds of tasks that show such effects. Oswald and Beier draw out three research areas that claim such effects, taking the position that the exceptional findings can be resolved by appraising the complexity of the tasks themselves.

First, 'pressure to perform' research asserts that when attempting difficult tasks, high cognitive ability individuals are more sensitive to pressures such as financial incentives or being observed, and their performance suffers accordingly. This is often construed as lower-ability individuals being more 'adaptable' to pressure. Beier and Oswald first point out that in these studies the performance drop still leaves high-ability individuals doing better than their counterparts, just by a smaller margin than before. They go on to offer another account: under normal conditions, high-ability people do indeed tackle difficult tasks by bringing their attentional resources to bear, which pays off in superior performance. When pressure arises, which can indeed mess with highly conscious strategies, they rely more on the blunt, guessy, heuristic approach which low-ability people were using all along. Under this account, then, higher-ability individuals are *more* adaptable under pressure, changing up their strategies and still coming out on top.

A study by DeCaro et al used a procedural learning paradigm where you must figure out the hidden rules to categorise images flashed on screen, varying in colour, shape, number and background. They showed that for simple rules high cognitive ability individuals learned faster, but the situation reversed when the rules were highly complex. Then, low-ability individuals are hypothesised to thrive by going with gut or employing kludgy strategies such as memorising individual successes rather than attempting to generalise to rules. But Beier and Oswald suggest two problems concerning the goal of the task. The dependent measure of task mastery was based on how long it took a participant to make a run of eight correct responses. But why not five? Or 15? A follow-up study showed that using 16-trial runs as your criterion, the ability liability disappeared. Perhaps more importantly, the goal was not explicit for participants. If it were, high-ability individuals might have exercised judgment as to whether to bother investing in solving the algorithm, or go for more imperfect tactics, as the reviewers believe occurs in pressure-to-perform situations.
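To see why the choice of run length matters, here is a minimal sketch of this kind of trials-to-criterion measure (the function is illustrative, not DeCaro et al's code, and the response sequence is made up):

```python
def trials_to_criterion(correct, run_length=8):
    """Number of trials taken to complete a run of `run_length`
    consecutive correct responses; None if the run never occurs.
    The run length of 8 mirrors the paper's criterion; 16 is the
    stricter alternative from the follow-up study."""
    streak = 0
    for trial, ok in enumerate(correct, start=1):
        streak = streak + 1 if ok else 0
        if streak == run_length:
            return trial
    return None

# A made-up learner who is erratic early but then locks in
responses = [True, False, True, True, False] + [True] * 20
print(trials_to_criterion(responses, 8))   # run of 8 completes on trial 13
print(trials_to_criterion(responses, 16))  # stricter criterion: trial 21
```

Because the measure only counts trials until one unbroken run, a learner using a kludgy, hit-and-miss strategy can look "faster" under a short run length yet lose that advantage when the criterion is lengthened.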

Finally, researchers investigating 'adaptive performance' have suggested that when learning a complex task, such as a tank-battle simulation, a sudden change in the task demands (a massive terrain shift) is harder to stomach for those of higher cognitive ability, who experience a larger drop in performance at the point where we leave the familiar old world and bravely enter a new one. Again, this is marshalled as evidence of a lack of flexibility due to commitment to one strategy.

In this particular study, the authors' analysis suggests that higher-ability people are not learning at a faster rate than their counterparts (presumably they begin with a higher capability, given that, just as in the pressure-to-perform literature, they do perform better overall). But through some simple modelling Beier and Oswald demonstrate that this is hard to believe, as it suggests that in a complex situation with constantly changing demands - hundreds of brave new worlds - higher-ability people would get worse and worse relative to those with less ability, which is a radical claim. Instead, they suggest that the parallel learning rates are due to the analysis approach used, and that in truth the finding is more intuitive: higher ability means you learn a situation more quickly, and thus have more to lose at the moment when the conditions are altered.

This line of research will continue, as we seek to better understand how performance is influenced by a range of interlocking factors. For now, Beier and Oswald conclude that their review "is strongly aligned with one of the most consistent findings in over a century of psychological research: Cognitive ability exerts a main effect such that the smarter you are, the better you will perform on just about any complex task, all else being equal."

Beier, M., & Oswald, F. (2012). Is cognitive ability a liability? A critique and future research agenda on skilled performance. Journal of Experimental Psychology: Applied, 18(4), 331-345. doi:10.1037/a0030869

Further reading:
 
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.
doi:10.1037/0033-2909.124.2.262

Friday, 8 February 2013

Saving the world through psychology


Can Occupational Psychology play a part in saving the world? Absolutely, insisted Prof Stuart Carr in his keynote presentation at the DOP conference. After all, work is deeply woven into the world, so transforming one can influence the other. Carr brought this home through examples of the United Nations' Millennium Development Goals, due in 2015, which include reduction of poverty, which manifests in the wages that workers derive; education, which depends on the capability of teachers and other staff; and gender inequality, which can be combated in the workplaces in which we spend much of our waking hours.

This exemplifies a humanitarian approach to work psychology: ensuring decent work for all workers, and ensuring that the work they do meets responsibilities towards multiple stakeholders rather than the bottom line alone. Carr provided some examples of how he and collaborators are making inroads into this, for instance by organising a Global Special Issue on psychology and poverty reduction that spanned multiple journals, raising awareness of how psychology can speak to these issues.

Carr also raised another way to use psychology to improve the world: by applying it directly to the conditions of those involved in humanitarian work. These roles can be risky and demanding, so it would be useful to investigate this and take steps to foster well-being. And any way to improve the impact of the humanitarian work itself would obviously be beneficial. Carr reported on the creation of online networks such as Humanitarian Work Psychology that connect researchers, students and those on the ground, who are commonly isolated, to allow them to share knowledge and put it to work on actual problems.

So we can change the world through 'Humanitarian' Work Psychology, making conditions of work decent everywhere, coupled with 'Humanitarian Work' Psychology, which focuses attention on those aspiring to be levers of change in the world. Further examples abounded in the presentation, including a global task force to address pay disparities in humanitarian work: dual pay levels for foreign and national staff distance the two groups through negative appraisals - the former rationalising the latter's low pay as reflecting their capability, the latter becoming demotivated and distrustful of the attitude of the foreigners - creating a vicious cycle.

There is much more to do, and the keynote was a call to arms to the profession as a whole. As Carr reminded us, much occupational psychology work developed in the Peace Corps in the 1960s and after, and only later became concentrated on the for-profit sector. A shift is possible and long overdue. Carr likened this to a koru, the fern frond native to many countries including his home of New Zealand, whose spiral shape suggests a return to beginnings, and whose swift unfurling denotes the possibility of change.

Carr, S., McWha, I., MacLachlan, M., & Furnham, A. (2010). International-local remuneration differences across six countries: Do they undermine poverty reduction work? International Journal of Psychology, 45(5), 321-340. doi:10.1080/00207594.2010.491990

Thursday, 31 January 2013

Do test cheats matter if you test enough people?


Over the past decade, the cheapness and convenience of online testing has seen its usage grow tremendously. Its critics point to the openings it creates for cheaters, who might take a test many times under different identities, conspire with past test-takers to identify answers, or even employ a proxy candidate with superior levels of the desired trait. Its defenders point to counter-tactics, from data forensics to follow-up tests taken in person. But the statistical models employed by researchers Richard Landers and Paul Sackett suggest that in recruitment situations, the loss of validity due to online cheating can be recovered simply through the greater number of applicants able to take the test.

Landers and Sackett point out that test administrators normally intend to select a certain volume of candidates through testing, such as the ten final interviewees. The accessibility of online testing could allow you to grow your candidate pool, say from 20 to 50. With these numbers, it's now possible to select those who scored better than 80% of the other candidates, rather than merely those in the top half. And if some of your candidates cheat, oomphing their scores to the 82nd percentile when they only deserve the 62nd, that's still a better calibre of candidate than the 50th-percentile-or-better you would have been prepared to accept from your smaller face-to-face pool.

Landers and Sackett moved from these first principles to modelling some realistic large data sets containing a range of true ability scores. They considered sets where cheating gave a small (.5 SD improvement) or large (1 SD) bonus to your test score; against this was another factor, how much your natural ability influenced your likelihood of cheating, from no relationship, r = 0, to increasingly strong negative relationships, from -.25 to -.75, modelling the idea that weaker performers are more likely to cheat. And finally, they varied the prevalence of cheating in increments from zero up to 100%.

The researchers ran simulations in each data set by picking a random subset - the 'candidate pool' - and selecting the half of the pool with better test scores. In the totally honest datasets, the mean genuine ability score of selected candidates was .24, but that value was lower for sets that contained cheaters, as some individuals passed without deserving it. Landers and Sackett then added more candidates into each pool, allowing pickier selection, and reran the process to see what true abilities were obtained. In many data sets the loss of validity due to cheating was easily compensated for by growth of the applicant pool. For instance, if cheating has only a modest effect and is only mildly related to test ability (r = -.25), then doubling the applicant pool yields genuine scores of .24 even when 70% of candidates are cheating, and higher scores when the cheaters are fewer in number, such as .31 for 30% cheaters.
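To make the set-up concrete, here is a toy re-implementation of this kind of simulation (not Landers and Sackett's actual code: the test validity of .3, the pool sizes, and the way cheaters are assigned are all illustrative assumptions, chosen so that the honest baseline lands near the .24 figure above):

```python
import numpy as np

rng = np.random.default_rng(42)

def selected_true_ability(pool_size, n_selected, cheat_rate,
                          cheat_boost=0.5, ability_cheat_r=-0.25,
                          test_validity=0.3, n_sims=3000):
    """Mean true ability of those selected, averaged over simulated pools.

    Observed score = validity * ability + error; the `cheat_rate`
    proportion of the pool most prone to cheating gains `cheat_boost`
    SDs, and cheating propensity correlates `ability_cheat_r` with
    true ability.
    """
    totals = []
    for _ in range(n_sims):
        ability = rng.standard_normal(pool_size)
        error = rng.standard_normal(pool_size)
        observed = test_validity * ability + np.sqrt(1 - test_validity**2) * error
        # Cheating propensity: noisy, negatively related to ability
        propensity = (ability_cheat_r * ability +
                      np.sqrt(1 - ability_cheat_r**2) * rng.standard_normal(pool_size))
        n_cheaters = int(round(cheat_rate * pool_size))
        cheaters = np.argsort(propensity)[pool_size - n_cheaters:]
        observed[cheaters] += cheat_boost
        chosen = np.argsort(observed)[pool_size - n_selected:]
        totals.append(ability[chosen].mean())
    return float(np.mean(totals))

# Honest pool of 20, select the best-scoring half: mean true ability ~ .24
honest = selected_true_ability(20, 10, cheat_rate=0.0)
# Double the pool, still select 10, but now 70% of candidates cheat
doubled = selected_true_ability(40, 10, cheat_rate=0.7)
print(round(honest, 2), round(doubled, 2))
```

The point the paper makes falls out of the mechanics: picking 10 from 40 is a stricter percentile cut than 10 from 20, which buys back much of the validity the cheaters destroy.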


Great... but wait. There are two important take-aways relating to fairness. It's true that if we're getting .31 averages instead of .24, our selected candidates should be more job-capable, even some of those who did cheat, and that's a win for whoever's hiring. But in the process we've rejected people who by rights deserved to go through. Essentially, this is a form of test error, and so not a uniquely terrible problem, but it's one we shouldn't become complacent about just because the numbers are in the organisation's favour.

Secondly, and as anyone trained in psychometric use will be aware, tightening selection ratios from the top 50% to the top 25% is no casual step. Best practice dictates that without evidence, such as an in-house validity study, cut-offs on a single test should be capped at the 40th percentile, meaning you pass 60% of candidates. In particular, raising thresholds can have adverse impact on minority groups, on whom many tests still show differentials (although these are closing over time). As minorities tend to make up a minority of any given applicant pool, such differentials can easily squeeze the diversity out of the process before you even get a chance to sit down with candidates and see what they have to offer in a rounded fashion.

Nevertheless, this paper brings a fresh angle to the issue of test security.


Landers, R., & Sackett, P. (2012). Offsetting performance losses due to cheating in unproctored Internet-based testing by increasing the applicant pool. International Journal of Selection and Assessment, 20(2), 220-228. doi:10.1111/j.1468-2389.2012.00594.x

Further reading:

Tippins, N. T. (2009). Internet alternatives to traditional proctored testing: Where are we now? Industrial and Organizational Psychology, 2, 2–10.

Finding the balance between work and home

(We're reporting from this month's Division of Occupational Psychology conference at the Digest. This post is by Dr Jon Sutton, Managing Editor of The Psychologist, and will also feature in that magazine's March issue. @jonmsutton / @psychmag)



Who is responsible for work-life balance? The individual, the organisation, or even the legislative system? That was the question posed at the start of this symposium, and it became clear that ‘one size fits all’ policies and practices don’t exist: we must understand needs and wants in order to tailor solutions.

First up was Dr Ellen Ernst Kossek (Purdue University, US), who identified the importance of feeling in psychological control of boundaries. Based on three validated measures of ‘cross role interruption behaviours’, ‘boundary control’ and ‘work-family identity centralities’, Kossek outlined different profiles. You’re either an ‘integrator’, or a ‘separator’, or you cycle between the two: a ‘volleyer’. Add in consideration of whether your well-being level is high or low and you end up with six styles, including the ‘fusion lovers’ who are happy to integrate work and family life, the ‘job warriors’ who volley away to their heart’s discontent, and ‘captives’ who are the separators with low well-being.

The image of Winston Churchill in his pyjamas, as an early remote worker, cast a large shadow over the talk by Dr Christine Grant and colleagues from Coventry and Warwick Universities. Grant described her work to outline competencies related to the effective e-worker, and to develop an assessment tool. Organisations can provide training for existing and new e-workers, Dr Grant said, before leaving us with the thought that ‘a good manager is always a good manager; a bad manager is worse as an e-worker’.

It’s one thing taking your work home with you when you’re an academic or editor, but another entirely when you’ve just been pulling a family out of some motorway wreckage. Dr Almuth McDowall (University of Surrey) looked at work-life balance self-management strategies in the police force, eliciting 134 behaviours from semi-structured interviews. Some were context-specific: in the police, for example, it’s actually very important not to take work home with you, as it is confidential and often intrusive material. McDowall highlighted the importance of communication and negotiation over work-life balance, and suggested that there is a separate competence for line-managing work-life balance in others.

Finally, Professor Gail Kinman (University of Bedfordshire) tackled a subject close to home for many: work-life conflict in UK academics. She noted that academics vary in the extent to which they wish their roles to be integrated, with many highly absorbed in the job role and most working considerably over the 48-hour working time directive. In Kinman’s survey of 760 academics across at least 99 universities, most academics weren’t getting the separation they wanted. Working at home and ICT use predicted work-life conflict. Kinman called for enhanced sensitivity to variation in boundary management styles and preferences amongst colleagues and supervisors, citing the example of sending e-mails at the weekend as potentially role-modelling that behaviour for the recipient.

Another interesting point to emerge from the symposium is that most measures of work-life balance are focused on the impact on families, despite the fact that it’s an issue for the single and childless as well.

Further reading:

Ernst Kossek, E., Lewis, S., & Hammer, L. (2009). Work-life initiatives and organizational change: Overcoming mixed messages to move from the margin to the mainstream. Human Relations, 63(1), 3-19. doi:10.1177/0018726709352385

Wednesday, 30 January 2013

Are organisations led by the limbic system?


(We're reporting from this month's Division of Occupational Psychology conference at the Digest. This post is by Dr Jon Sutton, Managing Editor of The Psychologist, and will also feature in that magazine's March issue. @jonmsutton / @psychmag)



According to keynote speaker Gerard Hodgkinson (Professor of Strategic Management and Behavioural Science at Warwick Business School), ‘Descartes’s error is alive and well in the workplace’. In a bold and wide-ranging address, Hodgkinson made the case for why and how occupational psychology needs to connect with the social neurosciences.


Hodgkinson is bringing psychology into the field of strategic management, trying to help decision makers become more rational. Take how organisations tend to respond to a major threat or opportunity (HMV and Blockbuster come to mind as I write this). Usually there are small, incremental changes, and when it becomes apparent this isn’t sufficient, what does the organisation do? Nothing. There is a period of ‘strategic drift’. Then there is a period of ‘flux’, which on Hodgkinson’s graphic representation looks rather like a tailspin. This is followed by ‘phase 4’, ‘transformational change’ or ‘complete demise’.

But to what extent can psychology shed light on this process? Hodgkinson’s 2002 book ‘The Competent Organization’ argued the case for the centrality of the psychological contribution to organisational learning and strategic adaptation, yet 11 years on, he said, there was still only passing consideration of affective and non-conscious cognitive processes. Why do we continue to sidestep them?

Using examples from his practice, Hodgkinson demonstrated how strategising is both an inherently cognitive and affective process. Eliciting a cognitive taxonomy from senior figures in a UK grocery firm, he found that although the market conditions had changed dramatically, mental models – individually and collectively – had not. Decision makers were slaves to their basic psychological processes, for example still focusing on the ‘magic number’ of ‘7 plus or minus 2’ competitors.

Hodgkinson showed how he confronts strategic inertia in top management teams, stimulating individual cognitive processes through scenario analysis. Some organisations excel at this: Hodgkinson claims that Shell closed all its facilities within 45 minutes of 9/11; while others were still struggling to comprehend what was happening, its scenario planning had allowed it to take quick and decisive action.

Hodgkinson’s latest research draws on social cognitive neuroscience and neuroeconomics to develop a series of counterintuitive insights. His hope is that these can teach people to be more skilled in their control of their emotional, limbic system. True rationality, he concluded, is the product of the analytical and experiential mind.

Further reading:

Hodgkinson, G., & Healey, M. (2008). Cognition in organizations. Annual Review of Psychology, 59(1), 387-417. doi:10.1146/annurev.psych.59.103006.093612

Friday, 18 January 2013

One personality to rule them all?


(We're reporting from this month's Division of Occupational Psychology conference at the Digest. This post is from regular editor Alex Fradera, and the report will also feature in the March issue of The Psychologist magazine.)


Until recently I was pretty ignorant of the idea of a general factor of personality, a situation which undoubtedly hurt my psychology-nerd cred. I'm back on track now, thanks to an afternoon spent in Rob McIver's symposium on the matter.

The general factor of personality, or GFP, is analogous to g, the general factor of intelligence that predicts, to differing degrees, the multiple specific abilities - verbal, musical, numerical - that sit below it. (The symposium reminded us that whereas Spearman posited g in the 1900s, and Thurstone the differential intelligence model in the twenties, it took until the 1950s for Philip Vernon to reconcile the models.)

While practitioners who use personality tests emphasise their differential qualities - many facets, no one right profile - the academics who advocate the GFP say that, on the contrary, there is such a thing as having lots of personality, and that this global factor is meaningful, predicting a range of life outcomes. Critics say this may be down to statistical artefacts, such as an individual's desire to appear socially desirable influencing all their questionnaire responses. So this symposium took us into the science, and particularly what it means for practitioners.

The first session, given by Rainer Kurz of Saville Consulting, was the most technical in focus, introducing a way to derive a GFP simply by summing raw scores on each Big Five personality measure. It's an intuitive approach that, in his dataset of 308 mixed roles, proved as valid in predicting job performance as the standard approach (extracting the 'first unrotated principal component') while avoiding some fiddly statistical issues. However, the GFP was not comprehensive: after partialling out its variance he found significant influences of personality subcomponents remained, notably assertiveness and achievement, suggesting the add-up method doesn't quite account for their influence. He concluded that this was a promising recipe, but the approach will take refining.
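For readers curious what the two extraction routes amount to, here is a generic sketch with simulated data (nothing here is Saville Consulting's procedure or data; the shared-factor structure and loadings are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake Big Five scores for 308 people: one shared factor plus noise,
# mimicking the positive intercorrelations a GFP account predicts
g = rng.standard_normal(308)
big5 = 0.5 * g[:, None] + rng.standard_normal((308, 5))

# Route 1: simply sum the (standardised) raw scale scores
z = (big5 - big5.mean(axis=0)) / big5.std(axis=0)
gfp_sum = z.sum(axis=1)

# Route 2: extract the first unrotated principal component
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
first_pc = z @ eigvecs[:, -1]

# With roughly equal loadings the two scores are nearly interchangeable
# (sign of an eigenvector is arbitrary, hence the absolute value)
r = abs(np.corrcoef(gfp_sum, first_pc)[0, 1])
print(round(r, 2))
```

When loadings are roughly equal the two routes largely coincide, which is why the interesting differences Kurz reported lie in the residue: the subcomponent variance, such as assertiveness and achievement, that neither score fully captures.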

His colleague Rob McIver chose to put aside notions of 'the ideal GFP' to explore total personality scores that predict success on a particular capability-set - in most cases, a job. Rather than relying just on factor extraction or the add-it-all-up approach, this starts by developing and shaping tests to pre-fit the criterion you care about.

McIver's data drew on external raters who had judged various facets of workplace effectiveness for the same individuals described by Kurz in his earlier presentation. The individuals had also completed seven different personality tests, and McIver explained how he generated a total personality score for each one using a criterion approach: personality dimensions were mapped onto effectiveness based on logic and previously reported relationships, meaning some dimensions were weighted heavily and others not at all if judged irrelevant to effectiveness. McIver showed how their own questionnaire, developed from the ground up around these effectiveness factors, produced the most powerfully predictive total scores, with an r up to .32.

McIver went further, producing a personality super-score for each participant by totalling all seven tests together. Would it work, given that many of these questionnaires were not developed with this effectiveness framework in mind? It turns out that united they stand, pretty well, with a validity of .27, thanks in some part to the criterion-based pruning and weighting. McIver concluded that this approach may be more profitable than searching for one true GFP.

Between these two talks, Rob Bailey of OPP took the floor to question whether, in any case, true GFPs could be useful for practitioners. He pointed out that the literature tends to describe the general factor as reflecting people who are relaxed, sociable, emotionally intelligent, satisfied with life, and altruistic - and that a low score means the opposite of these things. He challenged the symposium to imagine cases where such information could be provided to an individual in any constructive fashion, compared to the conventional profiling approach.

Bailey then went to the data, in this case taken from over 1,200 individuals paid to complete a 16-factor personality questionnaire, the absence of career implications giving them little incentive to 'fake nice' and apply spin to their results. His component analysis suggested the personality data could reduce to two factors, not fewer, and he showed how opting to use the dual factors rather than the 16 original ones weakened the ability to predict variables such as job satisfaction, dropping coverage from 9.3% of the variance to 7.5%.

He concluded that granularity, not fat factors, may be a better bet for predictive power. He also cautioned that the differences he finds (no single factor; more value in the parts than the whole) may result from using a personality measure that isn't built to the specifications of the Big Five - and that, in fact, dependence on that model may be under-valuing the diversity, and thus relevance, of personality itself.

When the dust settled, the questions remained, but the issue of the GFP will undoubtedly be one we will revisit.

Further reading:
van der Linden, D., te Nijenhuis, J., & Bakker, A. (2010). The General Factor of Personality: A meta-analysis of Big Five intercorrelations and a criterion-related validity study. Journal of Research in Personality, 44(3), 315-327. doi:10.1016/j.jrp.2010.03.003

The dark side of behaviour at work

(We're reporting from this month's Division of Occupational Psychology conference at the Digest. This post is by Dr Jon Sutton, Managing Editor of The Psychologist, and will also feature in that magazine's March issue. @jonmsutton / @psychmag)

The face that launched a thousand peer-reviewed journal articles beamed down from the stage as self-confessed ‘well adjusted workaholic’ Professor Adrian Furnham (University College London) began his keynote. Quips were in ready supply, but Furnham is much more than a crowd pleaser: this was a talk steeped in history and theory.

According to Furnham, there are 70,000 books in the British Library with leadership in the title. But most leaders don’t succeed, they fail, with a base rate of bad leadership, collated from various studies, of 50 per cent. This is due to incompetence (not having enough of something, or being promoted beyond the job they are good at), or derailment (having too much of a characteristic, such as self-confidence or creative quirkiness). It’s this latter problem that Furnham focused on, identifying three root causes: troubled relationships; a defective or unstable sense of self; and ineffective responses to change.

Furnham highlighted three fundamental issues. Firstly, organisations ‘select in’, for the traits they think will help an employee be a success, rather than ‘selecting out’ for what is going to cause problems. Secondly, it’s assumed that competencies are linearly related to success. And thirdly, employers fail to see the dark side of bright side traits and the bright side of dark side traits. For example, what if a self-confident leader pursues a risky course of action built on overly optimistic assumptions?

How do we characterise what makes a leader destructive? Furnham feels that the early ‘trait’ approach to leadership failed because ‘the list of traits grew remorselessly, leading to confusion, dispute and little insight’. Trait theory also ignored the role of both subordinates and situational factors. This oversight was rectified in the work of Tim Judge – who Furnham called ‘the best living occupational psychologist’ (see Digest coverage here) – which showed the ‘toxic triangle’ of destructive leaders, susceptible followers and conducive environments. The influence of the model was clear in Furnham’s own consideration of the ‘Icarus syndrome’: high flyers fall through poor selection, flawed personality, no or poor role models, and because they are rewarded for toxicity in the organisation.

Furnham then cantered through some typical personality disorder problems in plain English: arrogance, melodrama, volatility, eccentricity, perfectionism and so on. I was struck by the simple, neo-psychoanalytic conception of Karen Horney from 1950: people move away from others, towards them, or against them (something covered recently). Furnham outlined some just-published research on the differences between private and public sector dark side traits: private sector managers were more likely to move against others through manipulation or creating dramas, whereas public sector managers were more likely to show moving-away traits such as withdrawal, doubt, or cynicism.

A series of his own studies, generally with huge samples, elucidated sex differences in dark side traits and their relationships with career choice and success. From all this, Furnham distilled some key implications for selection and recruitment. Consider using ‘dark side’ measures; beware excessive self-confidence and charm; do a proper bio-data and reference check; and get an expert to ‘select out’ for you. As for management, the message was to beware fast-tracking wunderkinds, and to seek a mentor, coach or at least a very stable deputy to keep these individuals on the rails.

‘Just as a good leader can do wonders for any group, organisation or country,’ Furnham concluded, ‘a bad one can lead to doom and destruction. Understanding and developing great leaders is one of the most important things we can do in any organisation.’

Furnham, A., Hyde, G., & Trickey, G. (2013). Do your dark side traits fit? Dysfunctional personalities in different work sectors. Applied Psychology. doi:10.1111/apps.12002


Further reading:
Judge, T. A., Piccolo, R. F., & Kosalka, T. (2009). The bright and dark sides of leader traits: A review and theoretical extension of the leader trait paradigm. The Leadership Quarterly, 20(6), 855-875. doi:10.1016/j.leaqua.2009.09.004