Extending the conversation about evidence in psychology, the DOP conference held several sessions looking at exactly this issue as it pertains to an area close to the occupational domain: coaching.
A discussion session led by Rob Briner asked the simple question: does executive coaching work? The field as a whole makes many claims about coaching's effects, which Briner illustrated using choice quotes taken from public websites. Coaching can apparently make you more effective at work, help you lead your team better, help resolve interpersonal tensions, uncover strengths and help overcome weaknesses. Yet, as became clear during the workshop, very few practitioners have the evidence at their fingertips to support such claims. And as we will see, this is because such evidence is scant.
Another challenge to truly evidence-based coaching is that coaching engagements are often entered into without a clear definition of what success would look like. When the problem is this ill-defined, applying a 'treatment' (in the scientific sense) and then seizing on any change as proof that the treatment worked is problematic. That said, Briner made it clear that evidence-based does not mean 'randomised controlled trial or nothing', and outlined four types of evidence that matter:
- Practitioner experience
- Local context
- Evaluation of research
- Opinions of those affected
There is value in all of these, and in some situations (for example, when dealing with new approaches or revisions of existing ones) it isn't reasonable to expect an activity to have peer-reviewed data behind it. But coaching is hardly new, so we should be moving towards a position where its core claims are in some way validated.
The subsequent day's symposium on evidence in coaching was therefore well-timed. Amongst some presentations of interesting but narrowly focused investigations, Yi-Ling Lai presented her PhD research, a systematic review of the evidence on coaching. As such reviews should (and we'll look at this more thoroughly in a future digest), it followed a clearly defined process for deciding which studies should and shouldn't be included. The danger with non-systematic reviews is cherry-picking data to fit a narrative (intentionally or not); this analysis avoided that issue and presented what appears to be a reliable account of the current state of the field.
The review was highly useful in understanding what practitioners felt mattered most in coaching relationships: factors such as emotional support, trust, and the overall quality of the coaching relationship, rather than merely the content of the coaching sessions, were believed to be key. However, if anything, the review underlined that we are still missing an adequate body of hypothesis-driven research on effects and outcomes, at least of the sort that would support the claims frequently made about coaching. In line with the consensus reached in the previous day's discussion, the take-away is to talk about and market coaching activities in a way you feel you can defend, drawing on the four forms of evidence and being honest about the unknowns.
The effect of coaching isn't understood in the way that aspirin's is, nor is it ever likely to be. This isn't a problem until we make it one.