Why we need a new science of safety
It is often said that our approach to health and safety has gone mad. But the truth is that it needs to go scientific. Managing risk is ultimately linked to questions of engineering and economics. Can something be made safer? How much will that safety cost? Is it worth that cost?
Decisions under uncertainty can be explained using utility, a concept introduced by the Swiss mathematician Daniel Bernoulli almost 300 years ago to measure the amount of reward received by an individual. But the element of risk will still be there. And where there is risk, there is risk aversion.
Risk aversion itself is a complex phenomenon, as illustrated by psychologist John W. Atkinson’s 1950s experiment, in which five-year-old children played a game of throwing wooden hoops around pegs, with rewards based on successful throws and the varying distances the children chose to stand from the pegs.
The risk-confident stood a challenging but realistic distance away, but the risk averse children fell into two camps. Either they stood so close to the peg that success was almost guaranteed or, more perplexingly, positioned themselves so far away that failure was almost certain. Thus some risk averse children were choosing to increase, not decrease, their chance of failure.
So clearly high aversion to risk can induce some strange effects. These might be unsafe in the real world, as testified by author Robert Kelsey, who said that during his time as a City trader, “bad fear” in the financial world led to either “paralysis… or nonsensical leaps”. Utility theory predicts a similar effect, akin to panic, in a large organisation if the decision maker’s aversion to risk gets too high. At some point it is not possible to distinguish the benefits of implementing a protection system from those of doing nothing at all.
So when it comes to human lives, how much money should we spend on making them safe? Some people prefer not to think about the question, but those responsible for industrial safety or health services do not have that luxury. They have to ask themselves the question: what benefit is conferred when a safety measure “saves” a person’s life?
The answer is that the saved person is simply left to pursue their life as normal, so the actual benefit is the restoration of that person’s future existence. Since we cannot know how long any particular person is going to live, we do the next best thing and use measured historical averages, as published annually by the Office for National Statistics. The gain in life expectancy that the safety measure brings about can be weighed against the cost of that safety measure using the Judgement value, which mediates the balance using risk aversion.
The Judgement (J) value is the ratio of the actual expenditure to the maximum reasonable expenditure. A J-value of two suggests that twice as much is being spent as is reasonably justified, while a J-value of 0.5 implies that safety spend could be doubled and still be acceptable. It is a ratio that throws some past safety decisions into sharp relief.
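The ratio itself is simple arithmetic. As a minimal sketch (the function and the spending figures below are invented for illustration; the real maximum reasonable expenditure comes from life-expectancy and risk-aversion calculations not shown here):

```python
# Illustrative sketch only. The J-value is defined as the ratio of actual
# safety expenditure to the maximum reasonable expenditure. The figures
# below are hypothetical.

def j_value(actual_spend, max_reasonable_spend):
    """Ratio of actual to maximum reasonable safety expenditure."""
    return actual_spend / max_reasonable_spend

# A measure costing £4m where only £2m is reasonably justified:
print(j_value(4_000_000, 2_000_000))  # 2.0 -> spending twice what is justified

# A measure costing £1m where £2m could be justified:
print(j_value(1_000_000, 2_000_000))  # 0.5 -> spend could double and still be acceptable
```

A J-value of 1 marks the break-even point: spending exactly as much as the life-expectancy gain justifies.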
For example, a few years ago energy firm BNFL authorised a nuclear clean-up plant with a J-value of over 100, while at roughly the same time the medical quango NICE was asked to review the economic case for three breast cancer drugs found to have J-values of less than 0.05.
The Government of the time seemed happy to sanction spending on a plant that might just prevent a cancer, but wanted to think long and hard about helping many women actually suffering from the disease. A new and objective science of safety is clearly needed to provide the level playing field that has so far proved elusive.
Putting a price on life
Current safety methods are based on the “value of a prevented fatality” or VPF. It is the maximum amount of money considered reasonable to pay for a safety measure that will reduce by one the expected number of preventable premature deaths in a large population. In 2010, that value was calculated at £1.65m.
This figure simplistically applies equally to a 20-year-old and a 90-year-old, and is in widespread use in the road, rail, nuclear and chemical industries. Some (myself included) argue that the method used to reach this figure is fundamentally flawed.
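For illustration of how the VPF is applied in practice (the £1.65m figure is the 2010 value quoted above; the project cost and expected fatalities prevented are invented for this example):

```python
# Hypothetical worked example of applying the VPF. Only the VPF figure
# comes from the article; the project numbers are invented.

VPF = 1_650_000  # £, value of a prevented fatality (2010)

def cost_per_life_saved(total_cost, expected_fatalities_prevented):
    """Cost per statistical life saved, to compare against the VPF."""
    return total_cost / expected_fatalities_prevented

# A £10m safety scheme expected to prevent 4 premature deaths:
cost = cost_per_life_saved(10_000_000, 4)
print(cost)         # 2500000.0, i.e. £2.5m per life saved
print(cost <= VPF)  # False -> spend exceeds what the VPF deems reasonable
```

Because the VPF is a single population-wide figure, this comparison comes out the same whether the lives saved belong to 20-year-olds or 90-year-olds, which is precisely the criticism levelled at it above.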
In the modern industrial world, however, we are all exposed to dangers at work and at home, on the move and at rest. We need to feel safe, and this comes at a cost. The problems and confusions associated with current methods reinforce the urgent need to develop a new science of safety. Not to do so would be too much of a risk.
Philip Thomas, Professor of Risk Management, University of Bristol
This article was originally published on The Conversation. Read the original article.
WorkLife: Why is work making us sick? (Audio)
Worker compensation claims have been decreasing over time but this masks all kinds of problems with our wellbeing at work.
Making our workplaces healthier and safer means we have to confront all those things causing us stress at work. And that’s not going to be solved by standing desks, complimentary massage or lunchtime yoga.
LISTEN NOW > to the ABC Radio National podcast “WorkLife: Why is work making us sick?”
Workers fight back with deviant behaviour in a precarious workplace: study
PJ Holtum, The University of Queensland
When working conditions are harsh, workers are more likely to find satisfaction through small acts of deviant behaviour instead of banding together or joining a union, my research shows.
I interviewed 30 unskilled workers from five different sites in the greater Brisbane region. The workers came from large, centralised retail, automotive and food wholesaler workplaces and were under strict instruction and surveillance. I asked them about how they manage and organise their shifts.
The people working in these precarious conditions often concealed anxieties or insecurities about the role that work performs in their life. Their insecurities, however, emerged through deviant practices and cynical or apathetic behaviours to work.
Deviant actions involved cutting corners, avoiding paperwork and often avoiding health and safety procedures. Workers operated subtly in order to avoid detection from management.
These activities proved useful to workers because they allowed deadlines and quotas to be met more easily, while simultaneously allowing them avenues to socialise and enjoy aspects of their work day. While workers readily acknowledged deviating from management directives, they also recognised the importance of being perceived as a “valuable” worker.
My analysis suggests that deviant practices were often implemented in order to achieve more existential security at work. Deviant practices were important to workers who felt exhausted, stressed, or who had limited social interaction at work.
Other research shows that merely the threat of precarious employment has negative effects on workers’ health. This can manifest in physical and physiological forms: heightened risk of depression, stress, exhaustion, sleeping disorders, headaches, and high blood pressure.
Workers in precarious environments can also become “urban nomads”: stripped of the traditional benefits that come with regular salaried work, such as a sense of community, they can suffer a loss of work-based identity. It’s the loss of this community that leaves precarious workers not just financially, but socially, unstable.
The study suggests that workers were far more likely to game the system than to slack off. So rather than resist work entirely, workers were resisting the negative and precarious aspects of work.
This resistance allowed workers more social time and benefits they wouldn’t otherwise receive. While this distinction is subtle, it is important; it suggests that work is still a valuable social experience for these workers even though their relationship to it is precariously positioned.
Precarious work can involve any number of environmental uncertainties; however, the most significant appears to be the loss of paid leave entitlements and work benefits that comes with temporary employment.
Statistics from the United Kingdom suggest that one in five workers is employed under precarious work conditions. The statistics are much the same in Australia; the Australian Bureau of Statistics (ABS) lists casual or temporary employment at 19% (equivalent to one in five workers).
However, unlike the UK, Australian rates of unionisation are much lower. OECD figures suggest that trade union density (as of 2014) in the UK was 25.1%, compared with Australia’s 15.5%.
This disparity suggests that while Australian workers are just as likely as UK workers to be without secure employment, Australian workers are less likely to have the social support that unions and other organisations can offer.
Although deviant behaviour appears to be a problem for management, it’s important to recognise its social and psychological effects on workers. In an economic space that offers temporary contracts and little to no social support, it seems logical for workers to seek social-security through other avenues in the workplace. Consequently, even small acts of resistance provide valuable mechanisms for employees.
The data from this research suggests that while workers create existential security, they often fail to address the precarious working conditions that give rise to insecure mindsets. So while workers today partake in the ageless ritual of working-class resistance, the absence of collective organisation (such as through unions) appears to be particularly problematic.
The effects of precarious work and the construction of insecure workers is particularly important in our global age. Without collective action between workers, the re-integration of unions into the workforce or intervention from national governments, it seems that any localised resistance to precarious work will never be more than just what it is: localised.
PJ Holtum, PhD Candidate in Sociology of Work, The University of Queensland
This article was originally published on The Conversation. Read the original article.
As commuters shimmy past large, lumbering trucks on the road, they may glance over and wonder, “How safe is that driver next to me?” If the truck driver is in poor health, the answer could be: not very. Commercial truck drivers with three or more medical conditions have a two- to four-fold higher chance of being in a crash than healthier drivers, reports a new study led by investigators at the University of Utah School of Medicine in the USA.
The findings suggest that a trucker’s poor health could be a detriment not only to himself but also to others around him. “What these data are telling us is that with decreasing health comes increased crash risk, including crashes that truck drivers could prevent,” says the study’s lead author Matthew Thiese, Ph.D., an assistant professor at the Rocky Mountain Center for Occupational and Environmental Health (RMCOEH). The results were published in the Journal of Occupational and Environmental Medicine.
Keeping healthy can be tough for truck drivers, who typically sit for long hours behind the wheel, deal with poor sleeping conditions, and have a hard time finding nutritious meals on the road. Now, examination of medical records from 49,464 commercial truck drivers finds evidence that their relatively poor health may put them at risk in more ways than one: 34 percent showed signs of at least one of several medical conditions previously linked to poor driving performance, from heart disease to low back pain to diabetes.
Matching drivers’ medical and crash histories revealed that drivers with at least three of the flagged conditions were more likely to have been involved in a crash. There were 82 truck drivers in the highest risk group, and results were calculated from millions of data points reflecting their relative crash risk every day for up to seven years. The investigators found that this group was at higher risk across different categories of crashes, including accidents that caused injury and accidents that could have been avoided.
The rate of crashes resulting in injury among all truck drivers was 29 per 100 million miles traveled. For drivers with three or more ailments, the frequency increased to 93 per 100 million miles traveled, according to Thiese. The trends held true even after taking into consideration other factors that influence truckers’ driving ability, such as age and amount of commercial driving experience.
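The two rates quoted above imply a relative risk that can be checked with simple arithmetic (the rates are taken from the study figures; the rate-ratio calculation is just an illustration, not the study’s statistical methodology):

```python
# Injury crash rates per 100 million miles traveled, as reported above.
rate_all_drivers = 29   # all truck drivers
rate_three_plus = 93    # drivers with three or more flagged conditions

# A simple rate ratio gives the relative crash risk.
relative_risk = rate_three_plus / rate_all_drivers
print(round(relative_risk, 1))  # 3.2 -> within the two- to four-fold range reported
```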
The new findings could mean that one health condition, say diabetes, is manageable but diabetes in combination with high blood pressure and anxiety could substantially increase a driver’s risk.
“Right now, conditions are thought of in isolation,” says Thiese. “There’s no guidance for looking at multiple conditions in concert.” Current commercial motor vehicle guidelines pull truckers with major health concerns from the pool but do not factor in an accumulation of multiple minor symptoms.
Considering that occupants of the other vehicle get hurt in three-quarters of injury crashes involving trucks, it’s in the public interest to continue investigating the issue, says the study’s senior author Kurt Hegmann, M.D., M.P.H., director of RMCOEH. “If we can better understand the interplay between driver health and crash risk, then we can better address safety concerns,” he says.
Story Source: Science Daily
Materials provided by University of Utah Health Sciences.
- Matthew S. Thiese, Richard J. Hanowski, Stefanos N. Kales, Richard J. Porter, Gary Moffitt, Nan Hu, Kurt T. Hegmann. Multiple Conditions Increase Preventable Crash Risks Among Truck Drivers in a Cohort Study. Journal of Occupational and Environmental Medicine, 2017. DOI: 10.1097/JOM.0000000000000937