December 16, 2018

A syllabus for Resilient Health Care

Resilient Health Care (RHC) is an important part of the teaching on my Masters module Improving Safety & Quality in Healthcare (MH940). There is now growing interest in Safety-II and RHC, but different people and courses approach the subject in different ways depending on personal preference and experience. Unlike traditional safety engineering, or the more recent field of patient safety, RHC has no established syllabus.

The WHO has developed a patient safety curriculum, a large international effort and a very useful resource for educators. However, questions remain about the extent to which it is compatible with thinking in RHC, and whether RHC might make a useful contribution or provide an alternative vision for teaching patient safety.

With colleagues, I am currently thinking about this issue, and we are developing a (constructive) critique of the WHO patient safety curriculum from a Safety-II perspective. Subsequently, we will undertake a Delphi consensus development exercise with participants from the RHC community in order to identify learning outcomes, topics, and teaching approaches for an RHC syllabus.


April 05, 2018

Safety assurance of machine learning and autonomous systems

I have been reading papers and reports about the challenges of assuring the safety of machine learning and autonomous systems.

Safety assurance typically makes the case that a system is acceptably safe to operate in a particular context. The demonstration that this holds is based on an argument and corresponding evidence that all relevant hazards have been identified, that the risks associated with these hazards have been adequately controlled, that the overall residual risk is acceptable, and that the evidence backing these claims is sufficiently trustworthy. There is an underlying assumption that (a) the system behaves predictably, and (b) the operating context is reasonably well defined and stable.

Machine learning approaches have a number of well-known issues that affect the quality of the resulting systems. Among these are the problems of ensuring that the training data set is sufficiently representative of real-world data, and that the system is able to generalise from the training data to actual data (i.e. avoiding overfitting to the training data).
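
To make the overfitting issue concrete, a common check is to compare a model's performance on its training data with its performance on held-out data - a large gap suggests the model has memorised the training set rather than generalised. Below is a minimal sketch in Python, assuming scikit-learn is available; the synthetic dataset and the 0.1 gap threshold are purely illustrative assumptions, not established safety criteria.

    # Minimal sketch of an overfitting check: compare training accuracy
    # against held-out accuracy. A large gap suggests memorisation rather
    # than generalisation. Dataset and threshold are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for a real training set
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # An unconstrained decision tree can fit the training data almost perfectly
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    gap = train_acc - test_acc

    print(f"train={train_acc:.2f} test={test_acc:.2f} gap={gap:.2f}")
    if gap > 0.1:  # illustrative threshold, not a safety criterion
        print("Warning: large generalisation gap - possible overfitting")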

In terms of safety assurance of machine learning and autonomous systems, there are challenges that result from the adaptive (and therefore to a certain extent unpredictable) behaviour of such systems, and from the above quality issues. There is also a challenge to the assumption that the context is reasonably well understood, because autonomous systems might be deployed specifically in changing and dynamic contexts. A safety argument for machine learning and autonomous systems needs to demonstrate that this does not compromise safety.
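
One runtime measure sometimes proposed for this challenge is to monitor whether operational inputs still resemble the data the system was trained on, and to flag a possible context change when they do not. The sketch below illustrates the idea with a per-feature two-sample Kolmogorov-Smirnov test; it assumes NumPy and SciPy, and the synthetic feature data and the 0.01 significance level are illustrative assumptions. Such a monitor does not assure safety by itself - it only provides evidence that the operating context may have drifted from the one for which the safety argument was made.

    # Minimal sketch of a distribution-shift monitor: compare each input
    # feature observed in operation against the training distribution using
    # a two-sample Kolmogorov-Smirnov test. Data and significance level are
    # illustrative assumptions, not a validated monitoring scheme.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_features = rng.normal(0.0, 1.0, size=(1000, 3))  # training-time inputs
    live_features = rng.normal(0.5, 1.2, size=(200, 3))    # drifted operational inputs

    for i in range(train_features.shape[1]):
        stat, p_value = ks_2samp(train_features[:, i], live_features[:, i])
        if p_value < 0.01:  # illustrative significance level
            print(f"feature {i}: possible distribution shift (p={p_value:.3g})")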

At present there appear to be no complete solutions, but there is a lot of research activity in this area.

References:

[1] Faria, J.M., 2018. Machine learning safety: an overview. Safety-Critical Systems Symposium 2018.

[2] Burton, S., Gauerhof, L. and Heinzemann, C., 2017. Making the case for safety of machine learning in highly automated driving. SAFECOMP 2017.


November 26, 2017

Kellogg et al on the problems with the learning created from root cause analysis

Follow-up to my earlier entry "New (Safety-II) directions for organisational learning in healthcare".

I read Kathryn Kellogg and colleagues' paper "Our current approach to root cause analysis: is it contributing to our failure to improve patient safety?"

In the paper the authors describe their analysis of the solutions generated from RCAs at one hospital over an 8-year period. They argue that the most common solutions proposed by the RCA teams - training, policy reinforcement and disciplining - are also the most ineffective ones. They provide telling examples (awful in terms of the learning generated) from the RCA reports of how investigation teams are stuck in a work-as-imagined frame of mind, believing that their protocols work effectively but are undermined by human error.

It is precisely this failure to critically reflect on their assumptions that hinders much progress in patient safety (no double-loop learning). Investigators of safety incidents should study work-as-done, trying to understand the mismatches between protocols and actual practice.

Then, of course, the next step is to abandon the belief that learning needs to focus on what went wrong, and start appreciating the learning that could be harnessed by looking at how people make things work on a daily basis - Safety-II in action.

References:

Kellogg, K.M., Hettinger, Z., Shah, M., Wears, R.L., Sellers, C.R., Squires, M. and Fairbanks, R.J., 2017. Our current approach to root cause analysis: is it contributing to our failure to improve patient safety? BMJ Quality & Safety, 26, pp.381-387.


November 23, 2017

New (Safety-II) directions for organisational learning in healthcare

I am currently writing a commentary for the International Journal of Health Policy & Management on Russell Mannion and Jeffrey Braithwaite's editorial "False Dawns and New Horizons in Patient Safety Research and Practice" [1]. Russell and Jeffrey provide an insightful critique of traditional patient safety improvement efforts, and a powerful alternative vision based on Safety-II thinking.

In my commentary I apply the Safety-II perspective to organisational learning in healthcare organisations. My key argument is that healthcare organisations have been struggling to learn from experience because they are concerned only with incidents and adverse events - the extraordinary catastrophe [2]. What these approaches fail to appreciate is the role of performance variability and the manifold adaptations by healthcare workers, which prevent daily disruptions and tensions from turning into daily disasters [3]. In my opinion organisational learning should be concerned just as much with ordinary everyday clinical work (work-as-done) as with extraordinary failures [4,5]. A corollary is that organisational learning should be democratic and encompass frontline communities of practice, rather than being the remit of a central risk management facility [6].

References:

[1] Mannion, R. and Braithwaite, J., 2017. False dawns and new horizons in patient safety research and practice. International Journal of Health Policy and Management.

[2] Sujan, M.A., Pozzi, S. and Valbonesi, C., 2016. Reporting and learning: from extraordinary to ordinary. Resilient Health Care, Volume 3: Reconciling Work-as-Imagined and Work-as-Done, p.103.

[3] Sujan, M., Spurgeon, P. and Cooke, M., 2015. The role of dynamic trade-offs in creating safety—A qualitative study of handover across care boundaries in emergency care. Reliability Engineering & System Safety, 141, pp.54-62.

[4] Sujan, M.A., Huang, H. and Braithwaite, J., 2016. Learning from incidents in health care: Critique from a Safety-II perspective. Safety Science, 99, pp.115-121.

[5] Sujan, M. and Furniss, D., 2015. Organisational reporting and learning systems: Innovating inside and outside of the box. Clinical Risk, 21(1), pp.7-12.

[6] Sujan, M., 2015. An organisation without a memory: a qualitative study of hospital staff perceptions on reporting and organisational learning for patient safety. Reliability Engineering & System Safety, 144, pp.45-52.

