(We're reporting from this month's Division of Occupational Psychology conference at the Digest. This post is from regular editor Alex Fradera, and the report will also feature in the March issue of The Psychologist magazine.)
The general factor of personality, or GFP, is analogous to g, the general factor of intelligence that predicts, to differing degrees, the specific abilities - verbal, musical, numerical - that sit below it. (The symposium reminded us that whereas Spearman posited g in the 1900s, and Thurstone his multiple-factor model in the 1930s, it took until the 1950s for Philip Vernon to reconcile the two.)
While practitioners who use personality emphasise its differential qualities - many facets, no one right profile - the academics who advocate the GFP say that, on the contrary, there is such a thing as having lots of personality, and that this global factor is meaningful, predicting a range of life outcomes. Critics say this may be down to statistical artefacts, such as a respondent's concern for social desirability colouring all of their questionnaire responses. So this symposium took us into the science, and particularly what it means for practitioners.
The first session, given by Rainer Kurz of Saville Consulting, was the most technical in focus, introducing a way to get a GFP simply by summing raw scores on each of the Big Five personality dimensions. It's an intuitive approach that, in his dataset of 308 people in mixed roles, proved as valid in predicting job performance as the standard approach (extracting the 'first unrotated principal component') while avoiding some fiddly statistical issues. However, the GFP was not comprehensive: after partialling out its variance he found significant influences of personality subcomponents remained, notably assertiveness and achievement, suggesting the add-up method doesn't quite account for their influence. He concluded that this was a promising recipe, but one that will take refining.
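For the statistically minded, the additive GFP Kurz described can be sketched in a few lines: standardise each Big Five scale and sum the results per person, then correlate the total with a performance criterion. The scores and performance ratings below are invented purely for illustration, not Saville's data.

```python
# Minimal sketch of an additive GFP, using only the standard library.
# All data here are invented for illustration.
import statistics

def zscores(xs):
    """Standardise a list of scores (sample mean and standard deviation)."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    zx, zy = zscores(xs), zscores(ys)
    return sum(a * b for a, b in zip(zx, zy)) / (len(xs) - 1)

# Invented Big Five scale scores for six people (one column per person)
big_five = {
    "openness":          [4, 3, 5, 2, 4, 3],
    "conscientiousness": [5, 2, 4, 3, 5, 2],
    "extraversion":      [3, 3, 4, 2, 5, 1],
    "agreeableness":     [4, 2, 5, 3, 4, 2],
    "stability":         [4, 3, 4, 2, 5, 2],  # reversed neuroticism
}
performance = [4.1, 2.5, 4.4, 2.8, 4.8, 2.0]  # invented job-performance ratings

# Additive GFP: sum of standardised scale scores, person by person
z_by_scale = [zscores(scale) for scale in big_five.values()]
gfp = [sum(person) for person in zip(*z_by_scale)]

print(round(pearson_r(gfp, performance), 2))
```

The standard alternative Kurz compared against - the first unrotated principal component - weights the scales by how much shared variance they carry, rather than equally as here.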
His colleague Rob McIver chose to put aside notions of 'the ideal GFP' to explore total personality scores that predict success on a particular capability-set - in most cases, a job. Rather than relying just on factor extraction or the add-it-all-up approach, this starts by developing and shaping tests in advance to fit the criterion you care about.
McIver's data drew on external raters who had judged various facets of workplace effectiveness for the same individuals described by Kurz in his earlier presentation. The individuals had also completed seven different personality tests, and McIver explained how he generated a total personality score for each one using a criterion approach: personality dimensions were mapped onto effectiveness based on logic and previously reported relationships, meaning some dimensions were weighted heavily and others not at all if judged irrelevant to effectiveness. McIver showed how their own questionnaire, developed from the ground up around these effectiveness factors, produced the most powerfully predictive total scores, with an r up to .32.
McIver went further, producing a personality super-score for each participant by totalling all seven tests together. Would it work, given that many of these questionnaires were not developed with this effectiveness framework in mind? It turns out that united they stand, pretty well, with a validity of .27, thanks in some part to the criterion-based pruning and weighting. McIver concluded that this approach may be more profitable than searching for one true GFP.
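For the curious, the criterion-keyed weighting McIver described might look something like the sketch below - the dimension names, weights, and candidate scores are all invented for illustration, and the real mapping drew on logic and previously reported relationships rather than these made-up numbers.

```python
# Hypothetical sketch of a criterion-weighted total personality score:
# each dimension gets a weight reflecting its judged relevance to workplace
# effectiveness (0 = judged irrelevant), and the total is the weighted sum.
# All names and numbers here are invented.

weights = {
    "assertiveness": 1.0,
    "achievement":   1.0,
    "sociability":   0.5,
    "orderliness":   0.5,
    "aesthetics":    0.0,   # judged irrelevant to effectiveness, so dropped
}

def criterion_total(profile, weights):
    """Weighted sum of dimension scores; zero-weighted dims contribute nothing."""
    return sum(weights[dim] * score for dim, score in profile.items())

candidate = {"assertiveness": 7, "achievement": 6, "sociability": 4,
             "orderliness": 5, "aesthetics": 9}
print(criterion_total(candidate, weights))  # 7 + 6 + 2 + 2.5 + 0 = 17.5
```

A super-score across seven questionnaires would simply extend the same weighting over every dimension of every test.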
Between these two talks Rob Bailey of OPP took the floor to question whether, in any case, a true GFP could really be useful for practitioners. He pointed out that the literature tends to describe the general factor as reflecting people who are relaxed, sociable, emotionally intelligent, satisfied with life, and altruistic - and that a low score means the opposite of all these things. He challenged the symposium to imagine cases where such information could be provided to an individual in any constructive fashion, compared to the conventional profiling approach.
Bailey then went to the data, in this case taken from over 1,200 individuals paid to complete a 16-factor personality questionnaire, the absence of career implications giving them little incentive to 'fake nice' and apply spin to their results. His component analysis suggested the personality data could reduce to two factors, not fewer, and he showed how opting to use the dual factors rather than the 16 original ones weakened the ability to predict variables such as job satisfaction, dropping coverage from 9.3% of the variance to 7.5%.
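To put Bailey's percentages alongside the validities quoted earlier: variance explained is the square of the (multiple) correlation, so his two figures convert back to correlations as follows - a small arithmetic sketch using only the percentages reported above.

```python
# Converting Bailey's reported variance-explained figures back to
# correlation-scale values for comparison. Only the two percentages
# come from the talk; the rest is plain arithmetic.
import math

r2_sixteen_scales = 0.093   # 9.3% of job-satisfaction variance (16 scales)
r2_two_components = 0.075   # 7.5% when collapsed to two broad components

R_sixteen = math.sqrt(r2_sixteen_scales)
R_two = math.sqrt(r2_two_components)
print(round(R_sixteen, 2), round(R_two, 2))  # roughly 0.30 vs 0.27
```

On the correlation scale the drop looks modest, which is perhaps why Bailey framed his conclusion as a bet on granularity rather than a knockout blow.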
He concluded that granularity, not fat factors, may be a better bet for predictive power. He also cautioned that the differences he found (no single factor, more value in the parts than in the whole) may result from using a personality measure that isn't built to the specifications of the Big Five - and that, in fact, dependence on that model may be under-valuing the diversity, and thus the relevance, of personality itself.
When the dust settled, the questions remained, but the issue of the GFP will undoubtedly be one we will revisit.