Incident reporting in complexity (Staender)

Stimulated by direct contact with National Aeronautics and Space Administration (NASA) safety experts and based on earlier work in Australia, the first widely accessible anonymous critical incident reporting system (CIRS) in Europe was established at the Department of Anaesthesiology at the University of Basel in 1995.344,345 The focus was primarily on discovering weak points in the anaesthesia system, learning from them and thus becoming safer. Since then, CIRS has spread far beyond Basel, beyond Switzerland and far beyond the field of anaesthesia. Experience with CIRS in Europe has inspired a whole range of national incident reporting systems, for example in England and Spain. Incident analyses today complement the classical accident investigations of morbidity and mortality conferences and can thus uncover the multiple factors that have potentially contributed to an event, providing insights that can be used to prevent an identical event in the future (the so-called 'find and fix' approach). With this impact, incident reporting systems have also found their way into the recommendations of the Helsinki Declaration on Patient Safety in Anaesthesiology and are now operated both locally and nationally in a number of countries (including Denmark, Finland, England, Spain, Germany and Switzerland).346

This ‘find and fix’ approach rests on the assumption that safety is generated by avoiding errors. It reflects the traditional definition of safety, in which a state is described as safe when no errors occur. This definition is not unproblematic, because it treats safety as a ‘dynamic nonevent’.347 In view of the more or less constant rates of avoidable harm over time, the question must be asked whether this traditional definition of safety, and thus the ‘find and fix’ approach, is still entirely fit for purpose in ever more complex healthcare systems.348

Complexity as a challenge

In previous decades, health care was far less complex than today: many processes were linear and could be described in simple cause–effect relationships. Today, not only has the knowledge base become far more complex (e.g. with an unmanageable number of clinical guidelines), but our organisational structures and their interfaces, our patients (multimorbidity) and our therapies (polypharmacotherapy) have become more complex too.

In error causality, we have traditionally assumed predominantly linear relationships. This thinking was established in Heinrich’s so-called Domino Model (also known as the Accident Causation Model) of 1931 and was based on simple cause–effect relationships.349 These concepts were propagated in Reason’s Swiss Cheese Model and later in the Threat and Error Model.350

Complex systems, however, are characterised by a multitude of components and, in particular, by a high number of interrelationships; that is, complex systems are no longer linear. This means that they elude a linear approach to analysis and are therefore, on the one hand, very difficult to control and, on the other, carry great risk if they get out of control. With regard to medicine and to safety thinking in our discipline, this means that we succumb to an illusion when we believe that we can make a complex system safer with simple, linear process descriptions, rules and regulations and a ‘find and fix’ approach (the classical incident reporting concept).

Complex systems, such as modern health care, today depend largely on well trained experts being able to interpret a new and previously unknown constellation of factors on the basis of their knowledge, and to adapt previous behaviour on the basis of their expertise. This behaviour has been recognised in industry since the introduction of the concept of resilience (resistance to disturbance).347 It is based on the finding that there is a clear difference between work-as-imagined and work-as-done. The resilience literature also uses the term ‘textbook performance’, which is no longer sufficient to deal with imponderables: the textbook may be incomplete, overly limited or simply outdated, because our working conditions constantly change with new requirements, pressures or even threats. Textbook performance works only if the environmental factors are completely known and stable, a condition that can no longer be assumed in today’s sociotechnical systems in a state of flux.351

The resilience of a system is characterised by its ability to manage changing environmental conditions while nevertheless continuing to function to a greater or lesser degree. This is achieved through the following:

  (1) Buffer capacity: the size or extent of the disturbances a system can tolerate without collapsing.
  (2) Flexibility: the ability of a system to restructure itself in response to external pressure.
  (3) Tolerance: knowledge of how a system behaves at its performance limits, that is, whether it degrades slowly under pressure or collapses rapidly as soon as the pressure exceeds its adaptive capacities.351

In addition, individual behaviour can also show characteristics of resilience. Johnson and Lane352 defined the so-called C terms for resilient behaviour (Table 17).

Table 17: Individual behaviour showing characteristics of resilience (352)

Incident reporting systems under Safety-I and Safety-II

For safety thinking, this means that we must not rely solely on process descriptions, must not learn only from the incidents, mistakes and accidents of the past, and must not ignore the daily fluctuations in performance. This new safety thinking is today referred to as Safety-II, in contrast to Safety-I.353

In the future, therefore, we should use a new definition of safety that moves away from avoiding things going wrong towards making sure that everything goes right.353 We can no longer rely on our systems working well simply because we prevent errors; we also need to know why our systems work well every day. Accordingly, we will have to spend much more time understanding how professionals cope with ever-changing daily challenges and still deliver excellent results: what adaptations are being made and what they achieve.354

That is, we should not only look at what went wrong but also at what goes well. Everyday work usually succeeds because people do their best in the workplace, make sensible decisions and adjust to the requirements of the moment in order to cope with the situation. Understanding these adjustments and learning from them is at least as important as uncovering the causes of adverse events.

With regard to incident reporting systems, we can continue to use these proven instruments of patient safety in the future, but we should expand them with instruments that help us learn from everyday practice. Incident reporting systems should therefore be systematically extended to include learning from success, by encouraging employees also to report successful solutions to difficult, unexpected situations.

In addition to incident reporting, regular debriefings and so-called safety walk arounds could be introduced. These safety or leadership walk arounds are used to collect suggestions for improvement from employees at the grassroots level and are an integral part of lean management systems.355,356 A recent review article summarises the advantages and disadvantages of these methods and gives useful recommendations for the concrete implementation of the safety walk around.357 Adequate resources are needed, and if these resources are not granted for reasons of efficiency, improvements in our hospital systems may not materialise. In aviation it is said: ‘If you think safety is expensive, try an accident’. ‘Faster, better and cheaper’ was a NASA slogan that preceded several major accidents and disasters (e.g. the Columbia Accident Report or the Mars Climate Orbiter Mishap Investigation Report).358

Safety-II is also important in the context of ever-increasing production pressure in the healthcare sector: this pressure increasingly strains the resilience of our systems and thus potentially endangers patient safety. The pressure to achieve given annual targets (e.g. case numbers) in order to obtain sufficient revenue despite insufficient financing means that the word ‘safety’ is too often omitted when such target agreements are communicated to chief physicians: hospital managers and even medical staff appear more preoccupied with survival in the marketplace than with the survival of their patients.359
