Tuesday, 27 August 2013

How do we get the science of occupational psychology right?

How trustworthy is the scientific literature about psychology in the workplace? In a recent position piece, Sven Kepes and Michael McDaniel note that higher-profile controversies over social priming, practical intelligence, and a range of social psychology effects may reflect wider credibility issues in psychology, including in our neck of the woods, occupational or industrial-organisational (I-O) psychology.

Their case begins with the observation that hypotheses in psychology articles, including I-O ones, are far more likely to be confirmed than rejected, at a rate even higher than that already found across the sciences, and one that is rising over time. Why? Perhaps research questions are getting easier to investigate and predict, but a more likely factor is the competitive climate of academic journals, increasingly hungry for significant findings and 'strong theoretical contributions'.

The authors suggest researchers are armed with 'substantial methodological flexibility' that allows them to reach the sought-after result. Techniques include collecting data until a significant effect emerges, then stopping; including many similar variables in the design, in the hope that one will deliver; or simply retrofitting hypotheses to whatever pattern of findings the study threw out. As more positive results come out, theories become hard to disprove, especially as a single null result rarely overturns a positive finding: 'absence of evidence is not evidence of absence'. Both authors published analyses last year suggesting that a number of I-O meta-analyses are likely to have been affected by publication bias.

The article's recommendations tally with those more generally made for psychology, but it's worth summarising some of these:

  • An I-O research registry where researchers submit their study plans
  • A two-step approval process for articles, with the initial approval based on study methodology before data is even collected; this opens the door for more publication of (eventual) null results
  • Submission of raw data and analysis syntax, allowing peers to verify results and reducing the temptation to massage data
  • More journal space put aside for null results and replication studies
  • More attention to methodological factors in journals, for example by giving space to re-analysis without outliers or co-variates present

These proposals, and others like them, are the subject of vigorous debate at the moment - see the critique of research registries that Prof Sophie Scott has articulated (with many others behind her), and the various commentaries published following the current article, which we may return to in a later post.

Publications such as the Digests also have a role to play. This year the Occupational Digest reviewed how it could do more to support solid science, not just the sexy stuff: this led to our Further reading links and increasing coverage of reviews and meta-analyses.

Are we getting the balance right? We'd love to hear what you think.

Kepes, S., & McDaniel, M. A. (2013). How trustworthy is the scientific literature in I-O psychology? Industrial and Organizational Psychology, 6(3), 252-268. DOI: 10.1111/iops.12045

Further reading (including review of meta-analyses)

Kepes, S., Banks, G., & Oh, I.-S. (2012). Avoiding bias in publication bias research: The value of "null" findings. Journal of Business and Psychology, 1-21. DOI: 10.1007/s10869-012-9279-0


  1. I really like the idea of setting aside space for null results. It is rather strange that papers demonstrating that a certain method did not work often don't get published.