Thursday, 5 April 2012

Is it true that our perception of telephone waiting time depends less on the actual time than on other factors?

A customer's experience of dealing with a call centre on the phone can colour their attitude towards the organisation. A recent study claims that customer satisfaction with how long such calls take depends less on call time than on the quality of service we receive, suggesting that companies' focus on an 'ideal call time' may be misplaced. An interesting claim, but is it borne out by the data?

The team, writing in the journal Psychology, worked with a call centre to analyse data from 3013 calls: the true call duration and whether the customer felt they received (using a simple yes/no response in each case) good service, sufficient information, and, the key measure, a timely service, termed time satisfaction. An initial analysis showed that time satisfaction correlated quite highly with satisfaction with service and information, and had a more modest negative correlation with call duration. The sizes of the effects weren't directly compared.

A follow-up analysis split the data into four groups based on actual call time; for instance the 'low' group contained calls under two minutes in duration. In every group, a 'yes' for time satisfaction was much more likely to be found alongside yeses for service and information. Meanwhile, the relationship between time satisfaction and actual time was much milder, and in the low time group the effect was too weak to be significant. The authors argue that 'with waiting times being so low, time lost its value and that satisfaction with information and service were more important'.

But wait. Let's imagine that data had been split by call time, but rather than four groups there were many; so many that a single one only contained calls lasting 10m30s to 10m31s. We'd be unsurprised if within that group (and all its counterparts), call time was irrelevant; the range of possible times is so restricted that there is no interesting difference left. The chosen analysis produces a milder version of this 'restriction of range', disproportionately reducing the chances of detecting any effects for actual call time.
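The restriction-of-range effect is easy to demonstrate with a quick simulation. The sketch below (in Python, using made-up numbers rather than the study's data) generates calls whose satisfaction declines steadily with duration, then compares the correlation across the full range with the correlation inside a 'low' group of calls under two minutes:

```python
import random

random.seed(1)  # make the simulation reproducible

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: 3000 calls lasting 0-12 minutes; latent time
# satisfaction falls with duration, plus random noise.
times = [random.uniform(0, 12) for _ in range(3000)]
sats = [-0.5 * t + random.gauss(0, 2) for t in times]

r_full = pearson(times, sats)

# Restrict to the 'low' group (calls under two minutes), as in the
# study's grouped analysis.
low = [(t, s) for t, s in zip(times, sats) if t < 2]
r_low = pearson([t for t, _ in low], [s for _, s in low])

print(f"full range:  r = {r_full:.2f}")   # strongly negative
print(f"under 2 min: r = {r_low:.2f}")    # much weaker
```

Even though call time drives satisfaction identically everywhere in this toy dataset, the within-group correlation shrinks simply because the group leaves so little variation in call time left to correlate with.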

The study shows that satisfaction with timely call handling is coloured by factors aside from actual call time, and it's good to remind organisations that perceptions are not formed solely by such objective features. However, this research design doesn't actually put us in a position to rank these different aspects of a call.

Garcia, D. (2012). Waiting in Vain: Managing Time and Customer Satisfaction at Call Centers. Psychology, 03 (02), 213-216. DOI: 10.4236/psych.2012.32030

Monday, 2 April 2012

Too much focus on 'learning from failure' can make us unhappy


When we fail, how we feel and what we end up learning from it depends upon our coping strategy, according to new research. In particular, focusing exclusively on 'learning from failure' may make us miserable in the process.

This research explored the experiences of working scientists. Investigator Dean Shepherd and colleagues note that this domain involves facing frequent, disappointing project failures; consider, for example, the low rates of success in bringing new drugs to market.

The researchers personally contacted employees of German institutions working in areas such as pharmacy, zoology and ageing, with 257 scientists ultimately completing surveys consisting of standard and newly developed measures. The team were interested in two kinds of outcome from failure: positive, in the form of learning how to better run future projects or how to treat co-workers when their work is floundering, and negative emotional fallout, such as avoiding former project team members or feelings of disappointment. Both are vital, as learning creates organisational knowledge and negative emotions are associated with lower affective commitment to the organisation – a finding observed within this study.

Learning from a failure was higher when more time had elapsed since the failure itself, suggesting time provides perspective and insight. Learning was also influenced by a respondent's coping strategy or 'orientation': those who affirmatively responded to items such as 'In my mind, I often go over the events leading up to the project's failure' are considered to have a high loss orientation, and these individuals reported higher levels of post-failure learning.

As time elapsed, however, respondents with high loss orientation swung from a low level of negative emotions to a high one, suggesting that healthy reflection gives way to unhelpful rumination. Restoration orientation, a different strategy exemplified by the item 'I keep my mind active, so it does not focus on the loss of the project', is associated with lower levels of negative emotion, but doesn't provide the learning boost that a preoccupation with loss does. A third strategy, oscillation orientation, involves a willingness to switch actively from one mindset to the other, giving one's mind a rest before thinking about the project. Employing this strategy led to both more learning and a decrease in negative emotion over time.

As important as it is to learn from our mistakes, making this our overriding focus may be counterproductive. The authors advocate giving more space to a restorative approach: accepting that it can be good not to think about failure, and actively switching mindsets to gather insights while improving attitudes toward the project over time. Their data also show that a culture that treats failure as normal, taking it in its stride, leads to lower negative emotions overall, so there are steps that organisations can take as well.

Shepherd, D., Patzelt, H., & Wolfe, M. (2011). Moving Forward from Project Failure: Negative Emotions, Affective Commitment, and Learning from the Experience. The Academy of Management Journal, 54 (6), 1229-1259. DOI: 10.5465/amj.2010.0102

Thursday, 22 March 2012

Job outcomes and experiences suffer when managers regularly work remotely


Technology gives us the option to work in locations beyond conventional offices, either partially - termed teleworking - or fully, as a 'virtual' worker. We now understand that remote workers experience certain challenges, such as isolation and reduced access to resources. But there is scant research on the consequences of having a teleworking or virtual manager. Fortunately, a new article gets us up to speed.

Investigators Timothy D. Golden and Allan Fromen surveyed over 11,000 employees from a US-based Fortune 500 company. The online survey asked each respondent to report - for themselves and for their manager - their work mode: traditional (in the office full time), teleworking for a consistent fraction of the work week, or fully virtual. It also measured a host of work experiences and outcomes. Respondents managed by teleworking managers reported receiving less feedback and professional development, a more unbalanced workload, and feeling less empowered. A similar negative pattern was found for those with fully virtual managers. The effect sizes were small overall, suggesting this needn't be a make-or-break issue, but the trend was there.

The authors interpret this in terms of social exchange theory. Working relationships that are partly virtual offer fewer opportunities for rich exchanges: communications lack the face-to-face component, and there are fewer obvious chances to 'grab a moment', described by social innovator David Engwicht as spontaneous exchanges. Interactions are likely to be more task-focused and obligatory, as email is more onerous to produce than a quick coffee or a moment in the corridor. And professional development and mentoring become similarly laborious - always a dangerous place for any important but non-urgent activity to be.

How about those respondents who themselves worked remotely? The data suggests they have a similar experience regardless of their manager's work mode. The authors had predicted this group would experience better conditions when their manager also worked non-traditionally: they would both experience comparable challenges and make efforts to find mutually productive outcomes. But in reality, higher scores on the outcome variables were only found in a few instances and were extremely small. This suggests that if you don't share physical space with your manager, it doesn't matter much where they happen to be.

It's worth noting that in the US, rates of teleworking dropped between 2008 and 2010. Perhaps organisations and individuals have begun to appreciate that the attractions of remote working are tempered by modest but genuine drawbacks.

Golden, T., & Fromen, A. (2011). Does it matter where your manager works? Comparing managerial work mode (traditional, telework, virtual) across subordinate work experiences and outcomes. Human Relations, 64 (11), 1451-1475. DOI: 10.1177/0018726711418387

Friday, 16 March 2012

Leader evaluations draw on different racial stereotypes depending on leadership performance

We may think of stereotypes as fixed entities, but research suggests they are applied under certain conditions, often to make sense of situations. A new article applies this theory of 'goal-based stereotyping' to leadership, specifically the stereotype that 'black' people (the term used in the article) possess less leadership competence, in terms of qualities like intelligence, determination, or decisiveness. When a black leader performs poorly, the incompetence stereotype can be applied to easily explain the situation. It's less useful – and hence less likely to be used – when a black leader succeeds. Instead other, positive stereotypes can come into play, such as the stereotype that black people are especially warm or have 'survival instincts'. The success of the leader is justified on the basis of such qualities, seen as handy in some contexts but essentially compensatory, rather than core to the critical characteristic of leadership competence.

This argument was put to the test by investigators Andrew Carton and Ashleigh Rosette using a very specific example of leadership: US college football quarterbacks. These, considered leaders in their field*, are repeatedly and publicly evaluated by the media, making possible an archival study looking at weekly newspaper reports on the games of top-league university teams in 2007. The study focused on accounts involving each team's key quarterback; 31 of these were black and 82 white. Coders - blind to the purpose of the study - sought out 'evaluative phrases', where adjectives or adverbs were applied to the quarterback or his actions. These were then coded according to their valence (positive or negative) and meaning: competence statements were those that referenced relevant leadership qualities such as intelligence or decisiveness, whereas compensatory statements were those that referenced a specific non-leadership quality that could have a bearing on quarterback performance: athleticism, a positive stereotype associated with black people.

How did race influence quarterback evaluation? As per predictions, it depended on performance – in this case, match outcome. When black quarterbacks suffered a loss, they were more likely than their white peers to be painted as incompetent leaders, but there was no difference on this measure when they won. The exact opposite pattern was found for athleticism, which was attributed to black quarterbacks more frequently, but only when they won. Each stereotype leapt out as needed.

Carton and Rosette point out the obvious lesson for organisations: "success may not be credited to the leadership ability of blacks, but instead to attributes that are perceived to compensate for incompetence." They suggest vigilance in identifying compensatory stereotyping and combating it by challenging broad stereotypes (for instance, through examples of successful black leadership) and encouraging individual black leaders to circulate 'individuating' information, such as track record and skill sets, in order to contextualise their endeavours and make it less useful to reach for the sense-making, broad-brush explanation.

Carton, A., & Rosette, A. (2011). Explaining Bias against Black Leaders: Integrating Theory on Information Processing and Goal-Based Stereotyping. The Academy of Management Journal, 54 (6), 1141-1158. DOI: 10.5465/amj.2009.0745

Monday, 12 March 2012

Be all you can be: how military training affects personality


Military training intends to change behaviour, drilling the military way into new recruits, and providing incentives for sticking firmly to it. But how enduring are its effects? A recent study suggests that we may exaggerate the degree to which the military 'makes the man' (in this case), but that there are influences that endure well into the labour market.

Joshua Jackson of Washington University and a team from the University of Tübingen studied young German men performing their 9 months of military national service (3 in training, 6 on a post), measuring their personality both before training and two years after. A large control group was available thanks to the proportion of German citizens who conscientiously object to military service, opting for civilian duties over the same time period. Those who opt out of the army may differ in terms of personality, so the authors used a smart matching procedure, pairing up budding soldiers with one or two civilians who were similar in terms of personality. This created two comparable samples, matched on pre-training personality, of 241 (soldier) and 628 (civilian) participants.

All participants showed some shifts in personality over time, becoming less neurotic, more conscientious and more agreeable. These trends have been identified elsewhere as a feature of young adulthood, and are often construed as a developing maturity: coping better with setbacks, being more organised and accountable, and having more generosity of spirit toward others. The groups differed in one way only: the effect of increasing agreeableness was one third larger for the civilian than the military group.* This suggests that military training attenuates the upward trajectory of agreeableness seen in early adulthood.

A subset of participants were contacted on two further occasions, two years apart, giving 4 data points with which to examine this trajectory more fully. Across the six years, agreeableness increased year on year for the civilian group in a fairly linear fashion. The military group showed a steady increase, but it was extremely weak (from eyeballing the data, it looks as if the agreeableness increase in this smaller sample may not even be significant, but this isn't directly reported). Agreeableness steadily ticks upwards in the young adult years, unless participants undergo military experiences, in which case they see smaller or no changes to this personality trait, with no 'late blooming' of agreeableness to catch them up later.

Research on personality change can be challenging, not least because personality traits tend to be highly consistent. This study's matching procedure enabled it to identify how military experience seems to cause a deviation from the young adult trajectory of growing agreeableness over time. Lower agreeableness matters: it is associated with conflict in relationships and aggression, although it has also been associated with greater occupational attainment. At least as interesting for me, however, are the similarities of change of other traits across both groups. As the authors put it, "the maturation often attributed to military training...may actually be best ascribed to the specific time period of young adulthood."

*Cohen's d of .32 vs .21, both significant at p<.05
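For readers unfamiliar with the statistic in the footnote: Cohen's d expresses a difference between two means in units of their pooled standard deviation, so effects measured on different scales can be compared. A minimal sketch (with invented agreeableness ratings, not the study's data):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardised mean difference: (mean_b - mean_a) / pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_b) - mean(group_a)) / pooled_var ** 0.5

# Hypothetical agreeableness ratings (1-5 scale) at two time points.
time1 = [3.0, 3.2, 3.4, 3.1, 3.3, 3.5, 2.9, 3.2]
time2 = [3.2, 3.3, 3.6, 3.3, 3.5, 3.6, 3.1, 3.4]

print(round(cohens_d(time1, time2), 2))  # → 0.91
```

By Cohen's rule of thumb, d around .2 counts as small and .5 as medium, which puts both of the effects reported above at the small end.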

Jackson, J., Thoemmes, F., Jonkmann, K., Ludtke, O., & Trautwein, U. (2012). Military Training and Personality Trait Development: Does the Military Make the Man, or Does the Man Make the Military? Psychological Science. DOI: 10.1177/0956797611423545

Tuesday, 6 March 2012

Sleep less and waste more time online: the temptations of cyberloafing


Cyberloafing is when work time is frittered away on an unrelated online activity, whether it be web comics, perusing news sites or watching the 1982 snooker championship final. A new article suggests that we may be more prone to it when we haven't had enough sleep. Its authors, led by David Wagner, began by sifting through Google's publicly available data for rates of Entertainment-related searches, judged to be a reasonable proxy for cyberloafing. But how can anonymous data shed light on an issue involving sleeping habits?

The investigators recognised an event that affects everyone's sleep: when the clocks go forward for Daylight Saving Time. Prior evidence suggests we lose on average 40 minutes of sleep per night following the switch, as our body rhythms struggle to adjust. (Exploiting a fixed phenomenon like this is an example of a quasi-experiment; another would be the hurricane that occurred within this study on emotional hangovers.) The researchers used data from 203 metropolitan areas in the USA, weighted by area size, across 2004-2009. They found that Entertainment-related searches on the Monday after DST were 3.1% more prevalent than on the previous Monday, and 6.4% more prevalent than on the subsequent Monday. It's worth noting that the data isn't segmented by work and leisure hours, so the effect includes extra surfing that might occur later at night, when people are still feeling awake; however, the bulk of online activity occurs during working hours.

A second study took this to controlled lab conditions. 96 undergraduate students wore a sleep-monitoring bracelet overnight before attending a lab session to complete a computer task - assessing a potential new professor for the university by watching a 42-minute video lecture. What the researchers were really interested in was the amount of time participants would spend surfing the internet instead. Cyberloafing was higher for participants who experienced more instances of sleep interruption or less sleep overall, as recorded by their monitoring bracelet.

This is another piece of research advancing the ego depletion theory of why we fail to effectively regulate behaviour. This states that willpower is a resource that is used up through effortful acts, leaving us susceptible to temptation or laziness. Researchers have previously argued that sleep is a means of recharging our regulatory resources, and these studies confirm that less sleep does indeed make us prey to counterproductive activities like cyberloafing. However, those who naturally exercise self-discipline may be somewhat resistant: in study two, the effect of sleep interruption on cyberloafing was weaker for participants who scored high on a measure of conscientiousness administered beforehand. (The effect of less overall sleep still remained.) This is consistent with ego depletion, as highly conscientious types are more likely to actively use methods to regulate their effort to overcome counterproductive behaviours, rather than taking the path of least resistance.

The costs of cyberloafing have been estimated at around £300m a year, so it's worth understanding when we're more vulnerable to its temptations; UK employers should remember this when our clocks go forward on the 25th of this month. Aware of its power, I've included only one extraneous, non-work-related link in the above text, and it's a niche one at that. But if you're a classic snooker fan with a tricky deadline, I'm so sorry. Just think about all the time I wasted considering the alternatives.

Wagner, D., Barnes, C., Lim, V., & Ferris, D. (2012). Lost Sleep and Cyberloafing: Evidence From the Laboratory and a Daylight Saving Time Quasi-Experiment. Journal of Applied Psychology. DOI: 10.1037/a0027557

Thursday, 1 March 2012

How does clear specific feedback affect candidates who fail tests?

Online tests for recruitment are widely used, and routinely followed up with specific feedback to applicants, in order to communicate decisions, emphasise the pedigree of the process to forestall complaints, and to benefit the candidate. But does such feedback deliver on all these fronts, particularly when candidates have failed to meet the required threshold?

Sonja Schinkel and colleagues explored this through two studies. The first asked 81 university students to put themselves into a hypothetical job application process and attempt two ability tests, drawn from a well-established measure of general mental ability. All participants were then told they were 'rejected' for scoring outside the top 20% of test-takers. They then answered questions about how fair they felt the outcome was, and provided a second set of well-being evaluations (the first was taken before the test as a control variable for the analyses). How did appearing to fail the test make them feel?

Participants were happier when they felt the outcome was ultimately fair... unless they possessed an 'optimistic attributional style', measured before the test with items like 'what do you think when bad things happen to you?'. Why was this? This style involves attributing negative events to external, impermanent factors, and that attitude can help you dismiss a disappointment as just bad luck. But this buffer to well-being is eroded if you accept that an outcome is fair, owing something to internal and more enduring factors.

A second experiment with 244 participants replicated this finding, and extended it by contrasting the non-specific test feedback (you didn't make the cut-off) with false, specific feedback (this is where you scored). Such specific feedback was worse for the well-being of all participants. Moreover, optimists in this condition didn't enjoy the well-being buffer when they judged the outcome was unfair. It's as if the specific feedback unavoidably presents a jarring internal attribution that can't be explained away.

Experiencing a negative event, such as rejection, is unwelcome. Being able to attribute the event to external causes can lighten its emotional impact, but these studies demonstrate how many of the features of ability test feedback – emphasising the fairness of outcome through reference to psychometric properties, specificity of feedback including ranges of performance – impose internal attributions, and lead well-being to suffer, at least in the short term. Whether the self-insight gained outweighs the self-efficacy lost is a calculation left to another day.

Schinkel, S., van Dierendonck, D., van Vianen, A., & Ryan, A. (2011). Applicant Reactions to Rejection. Journal of Personnel Psychology, 10 (4), 146-156. DOI: 10.1027/1866-5888/a000047