5 Global Safety

From OHS BOK

Synopsis of the OHS Body Of Knowledge

Background

A defined body of knowledge is required as a basis for professional certification and for accreditation of education programs giving entry to a profession. The lack of such a body of knowledge for OHS professionals was identified in reviews of OHS legislation and OHS education in Australia. After a 2009 scoping study, WorkSafe Victoria provided funding to support a national project to develop and implement a core body of knowledge for generalist OHS professionals in Australia.

Development

The process of developing and structuring the main content of this document was managed by a Technical Panel with representation from Victorian universities that teach OHS and from the Safety Institute of Australia, which is the main professional body for generalist OHS professionals in Australia. The Panel developed an initial conceptual framework which was then amended in accord with feedback received from OHS tertiary-level educators throughout Australia and the wider OHS profession. Specialist authors were invited to contribute chapters, which were then subjected to peer review and editing. It is anticipated that the resultant OHS Body of Knowledge will in future be regularly amended and updated as people use it and as the evidence base expands.

Conceptual structure

The OHS Body of Knowledge takes a ‘conceptual’ approach. As concepts are abstract, the OHS professional needs to organise the concepts into a framework in order to solve a problem. The overall framework used to structure the OHS Body of Knowledge is that:

Work impacts on the safety and health of humans who work in organisations. Organisations are influenced by the socio-political context. Organisations may be considered a system which may contain hazards which must be under control to minimise risk. This can be achieved by understanding models of causation for safety and for health, which will result in improvement in the safety and health of people at work. The OHS professional applies professional practice to influence the organisation to bring about this improvement.


Audience

The OHS Body of Knowledge provides a basis for accreditation of OHS professional education programs and certification of individual OHS professionals. It provides guidance for OHS educators in course development, and for OHS professionals and professional bodies in developing continuing professional development activities. Also, OHS regulators, employers and recruiters may find it useful for benchmarking OHS professional practice.

Application

Importantly, the OHS Body of Knowledge is neither a textbook nor a curriculum; rather, it describes the key concepts, core theories and related evidence that should be shared by Australian generalist OHS professionals. This knowledge will be gained through a combination of education and experience.

Accessing and using the OHS Body of Knowledge for generalist OHS professionals

The OHS Body of Knowledge is published electronically. Each chapter can be downloaded separately. However, users are advised to read the Introduction, which provides background to the information in individual chapters. They should also note the copyright requirements and the disclaimer before using or acting on the information.

Global Concept: Safety

Professor Sidney Dekker PhD
Director, Key Centre for Ethics, Law Justice and Government, Griffith University
Email: s.dekker@griffith.edu.au

Sidney gained his PhD in Cognitive Systems Engineering from Ohio State University. He has been Professor at Lund University, Sweden; a Senior Fellow at Nanyang Technological University, Singapore; and Visiting Academic in the Department of Epidemiology and Preventive Medicine, Monash University. His research interests include system safety, human error, reactions to failure and organisational resilience, and he has authored several books on these topics.



Core Body of Knowledge for the Generalist OHS Professional


Global Concept: Safety


Abstract

This chapter discusses workplace safety by considering four questions. Is human error a cause of workplace accidents or a consequence of trouble deep within an organisation? Is ensuring compliance with rules and procedures a sufficient or a limited approach to safety? Is safety more appropriately conceptualised as an absence of negatives or the presence of certain capabilities? And is workplace safety best addressed at the level of components or at the level of systems? Reference is made to relevant literature underpinning the polar positions inherent in these questions. What qualifies as ‘best practice’ in safety may depend on where people or organisations position themselves (implicitly or explicitly) between these opposing viewpoints. OHS needs to be based on an understanding that not all workplace safety is created in the same way. Being open to fresh perspectives is critical to fostering diversity of viewpoints and methods.


Key words: safety, human error, organisations, work, system

Contents

Synopsis of the OHS Body Of Knowledge
Background
Development
Conceptual structure
Audience
Application
Accessing and using the OHS Body of Knowledge for generalist OHS professionals
1 Introduction
2 Is human error a cause or consequence?
3 Is compliance with rules a sufficient or limited approach?
4 Is safety best conceptualised as absence of negatives or presence of capabilities?
5 Is safety best addressed at component or system level?
6 Summary
7 References

1 Introduction

Safety is a large topic that resists simple definition. This chapter briefly explores past and present ideas about safety, particularly as they relate to OHS. It does so by considering four questions that refer to opposing points of view. Firstly, is human error a cause of workplace accidents or a consequence of trouble deep within an organisation? Secondly, is compliance with rules and procedures a sufficient determinant of workplace safety, or is its role severely limited? Thirdly, is safety more appropriately conceptualised as an absence of negative things or the presence of certain capabilities? Finally, should we view safety as a matter of getting rid of broken components or as a quality that emerges from the complexity of an organisational system?

Reduction of a history of ideas to a set of opposites is, of course, an oversimplification. However, it is an approach that facilitates access to the topic, and perspectives on the opposing points of view inherent in the questions are underpinned by a substantial body of literature. Furthermore, ‘best practice’ in safety can be identified based on where people or organisations position themselves (implicitly or explicitly) in the space between any of these opposing points of view. Positions taken, which can be read from the language that stakeholders use to describe safety problems, have immediate consequences for the capacity to create safer, healthier workplaces.

2 Is human error a cause or consequence?

The role of humans in creating or reducing safety in the workplace can be viewed from two perspectives. The first perspective, which can be considered the ‘old’ view, has been termed ‘the bad apple theory’ (Dekker, 2006; Woods, Dekker, Cook, Johannesen & Sarter, 2010). According to this view:

- Workplaces would be safe were it not for the erratic, ignorant or careless behaviour of some unreliable people (‘bad apples’).
- Workplace accidents are the result of human errors, violations and carelessness.
- Workplace safety incidents are unpleasant surprises that do not belong in the organisation.

According to the ‘bad apple theory,’ adverse outcomes can be avoided if individuals pay more attention, invest more effort, obey the rule or follow the procedure. OHS professionals and managers operating from this perspective wonder how they can cope with the unreliability of workers in their organisation. Reminders, posters, rules and procedures are often seen as a meaningful way forward (see section 3 below). OHS investigations conducted from this viewpoint find evidence for erratic, wrong or inappropriate behaviour. They bring to light people’s bad decisions, inaccurate assessments or deviations from rules or procedures, and single out particular workers for retraining, demotion, dismissal or other sanctions. Organisations that rely on such strategies are said to suffer from organisational learning disabilities (Argyris & Schön, 1978; Weick & Sutcliffe, 2007; Wildavsky, 1988). Managers operating from this perspective may feel better about themselves and succeed in appeasing regulators, owners, insurers and other outside parties, but rather than understanding and remedying causes of workplace accidents, they simply fight the symptoms.

An opposing view is that workplace incidents are an inevitable by-product of people doing the best they can in organisations that are themselves characterised by inherent contradictions between operational efficiency and safety. Most organisations need to achieve multiple goals at the same time, while under the pressure of limited time and resources, and often in a competitive environment. This ‘new view of human error’ begins with the assumption that people do not come to work to do a bad job, nor do they go out of their way to create trouble. If they do end up doing a bad job, the reasons should be sought not in the individual, but in the work setting. Most work situations contain multiple subtle vulnerabilities that may not be readily visible, and risks and safety threats that change over time. Organisations are constantly under pressure to find the right balance between doing things thoroughly and doing them efficiently (Hollnagel, 2009). From this perspective, workplace incidents are symptoms of trouble deep inside a system (AMA, 1998; Woods et al., 2010). According to this view:

- Human errors or rule violations are not the cause of workplace incidents; instead, they are an effect, or symptom, of deeper trouble.
- Human errors or rule violations are not random; they are systematically connected to features of people's tools, tasks, operating environments and organisational constraints.
- Human errors or rule violations are not the conclusion of an OHS investigation or inspection; they are the starting point.

Managers and OHS professionals operating from this viewpoint seek to identify the systemic vulnerabilities behind individual behaviour. They address error-producing conditions that, if left in place, will result in repetition of the same basic pattern of failure. Their work is driven by the following beliefs:

- Safety is never the only goal in organisations. Multiple interacting pressures and goals (relating to, for example, scheduling, competition, customer service and public image) are always at work.
- Trade-offs between safety and other goals often have to be made in a climate of uncertainty and ambiguity. While goals other than safety may be easy to measure (e.g. production targets), how much people and organisations borrow from safety to achieve those goals is difficult to measure.
- Systems are not basically safe. People in them have to create safety by tying together a patchwork of technologies, adapting under pressure and acting under uncertainty.

In the first, or old, view, OHS professionals and managers see themselves as custodians of already safe systems. Their role is to protect their organisation from unreliable, erratic or careless people. When something goes wrong, their question is: who is responsible? This is generally followed by an assessment of what should be done with this person, perhaps to set an example for others (Dekker, 2007). The judgmental and individual-focused language used in some investigations or inspections can sometimes (albeit unintentionally) infiltrate the justice system. A trend toward the criminalisation of human error and workplace safety violations (Dekker, 2011a) has grave consequences for the continued creation of safety cultures with an unencumbered flow of safety-related information (Dekker, 2007; Reason, 1997).

In the second, or new, view, OHS professionals and managers understand that people often have to create safety under less-than-ideal circumstances; people may be required to pursue multiple conflicting goals simultaneously, there may be time limitations and production pressures, or features of tools and tasks that make incidents more likely. When something goes wrong, their question is: what is responsible – what set of circumstances led to people doing their work this way? Consequently, accountability for failure and success is directed away from individual workers, and attention is drawn to the wider organisational setting in which safety is continually being created and compromised.

3 Is compliance with rules a sufficient or limited approach?

A common notion in OHS is that not following rules can lead to unsafe situations (Hopkins, 2010). It is frequently assumed that:

- Rules represent the best thought-out and thus the safest way to carry out a job.
- Rule following involves mostly simple if-then mental activity, i.e. if this situation occurs then this algorithm (rule or procedure) applies.
- Safety results from people following rules.
- For progress on safety, organisations must invest in people’s knowledge of rules and ensure that procedures are followed.

With the aim of standardisation, rules and procedures can play important roles in shaping safe practice. However, their application can become an easy, knee-jerk response to perceived problems. In the wake of a workplace incident, it can be tempting to introduce new procedures, change existing ones or enforce stricter compliance. This response may make sense, but it sometimes gets deployed because managers and inspectors do not really know what else to do, and it serves to cover their liability in any future related workplace incident. Introducing more procedures does not necessarily prevent the next incident, nor do exhortations to follow rules more carefully necessarily increase compliance or enhance safety.

Workers interpret procedures with respect to a collection of actions and circumstances that the procedures themselves can never fully specify (Suchman, 1987; Wright & McCarthy, 2003). In other words, procedures are not the work itself. Work often requires subtle local judgments with regard to timing of subtasks, relevance, importance, prioritisation and so forth. Safety, then, is not the result of rote rule following; it is the result of people’s insight into the features of situations that demand certain actions, and people being skilful at finding and using a variety of resources (including written guidance) to accomplish their goals. This suggests an opposing way of looking at the role of procedures in creating safety:

- Procedures are resources for action. Procedures do not specify all circumstances to which they apply; they cannot dictate their own application and cannot, in themselves, guarantee safety.
- Applying procedures successfully across situations can be a substantive cognitive activity that requires skill.
- Safety results from people being skilful at judging how and when (and when not) to adapt procedures to local circumstances.
- For progress on safety, organisations must monitor and understand the reasons behind the gap between procedures and practice. Additionally, organisations must develop ways of enhancing people’s skill at judging when and how to adapt.

Compliance-based approaches to safety tend to rely on the assumption that the work to be done is inherently simple, or at least not complex. Indeed, compliance-based approaches can work well if:

- The work to be carried out is entirely knowable and known; it is completely described, will not change, and contains no surprises or ambiguities.
- There is only one best method to carry out the work, and that method is specified fully and in complete detail, leaving no room for interpretation.
- The setting in which the work is done is closed to the environment; no outside influences or distractions can disrupt the work or change the demands on it.

The problem, of course, is that very few jobs actually fit these criteria. This is why any system of safety that relies heavily on procedures and standardisation needs to be acutely sensitive to the limits of its reach. Do the rules remain applicable and appropriate over time? Have things changed in the environment?

Systems that are open to a constant flow of feedback about how best to carry out work are better able to adapt to changing circumstances (Weick & Sutcliffe, 2007); this also means that they are probably safer in the long run. An OHS professional who is attuned to such feedback can learn a lot more than one who is just monitoring whether people comply with existing rules and procedures. Indeed, the question might be whether the organisation has a good system in place for monitoring the gap between how it thinks (or hopes, or pretends) work is carried out, and how work is really carried out. There will always be a gap of some sort. Ideally, safety creation is not a blinkered drive to close that gap, but a sensitivity to organisational processes that have been set in place to monitor and understand the reasons for that gap (Dekker, 2003).

4 Is safety best conceptualised as absence of negatives or presence of capabilities?

The realities of insurance, OHS legislation and regulation have resulted in an OHS-professional focus on things that can be easily measured, counted and tabulated. These are often negative things, such as workplace incidents or lost-time injuries. There is the expectation that reducing such negatives is good for safety, even though it may be good for little more than improving insurance rates or upholding an image of acceptable practice, insurability, managerial competence or regulatory compliance. The fact that things can be counted does not mean that they should be counted, or even that they have anything to do with safety.

Some recent large-scale accidents have highlighted the risks involved in maintaining an exaggerated focus on reducing OHS negatives. For example, on the eve of the largest oil spill in US history, executives on the Deepwater Horizon platform celebrated seven years without lost-time injuries (Elkind & Whitford, 2011). In this company, walking around without a lid on a coffee cup was a punishable OHS offence, but conducting a critical negative-pressure test in the process of deepwater drilling was not prescribed. A focus on little measurable things has a tendency to spawn organisational ‘fantasy documents’ that make everybody look compliant and well-prepared, but do nothing to enhance an organisation’s ability to prevent or contain disaster (Clarke & Perrow, 1996; Elkind & Whitford, 2011). In fact, a focus on easily measurable surface features may divert resources and attention from deeper problems that have the potential to wreak havoc (Dekker, 2011b).

A few years ago, as leading researchers in safety began to express doubts about the usefulness of negatives (violations, incidents) as targets for intervention, a school of thought termed Resilience Engineering emerged (Hollnagel, Woods & Leveson, 2006). Proponents of Resilience Engineering questioned the appropriateness of defining safety as the absence of something and of counting violations, tabulating incidents and then trying to eliminate them. Safety, it seemed, did not lie just beyond some incident-free horizon.

Resilience Engineering is based on the notion that people hold together inherently imperfect and contradictory organisations, and that people, at all organisational levels, create safety through practice. It proposes that safety is not about absence, but about the presence of something. It maintains that, even with time pressure, multiple goals and resource limitations, things go right not because of rigid adherence to safety rules and procedures (LaPorte & Consolini, 1991; Pidgeon & O'Leary, 2000; Rasmussen & Batstone, 1989; Reason, 2008), but because of adaptive capacity, that is, the ability of people and organisations to recognise, absorb and adapt to changes and disruptions, some of which may fall outside what the system has been designed to handle. Some of the identified attributes of resilient teams and organisations have relevance for OHS professionals:

- Resilient organisations do not take past success as a guarantee of future safety; past results are an inadequate guide to ongoing effectiveness of adaptive strategies.
- Resilient organisations keep discussion of risk alive even when everything looks safe; the model of what is risky requires continual updating (Huber, van Wijgerden, de Witt, & Dekker, 2009).
- Resilient organisations are open to different and fresh perspectives on problems; they listen to minority viewpoints, invite doubt, and stay curious and open-minded (Weick & Sutcliffe, 2007).
- Resilient organisations have somebody or some function with the resources, the credibility and the authority to ‘put the foot down’ and invest in safety when everybody else says that they cannot. That is often the time when such investments are most necessary (Starbuck & Farjoun, 2005; Woods, 2003).

Resilience Engineering is not about reducing negatives (incidents, errors, violations), but about identifying and enhancing the positive capabilities of people and organisations that allow them to adapt effectively and safely under pressure. It is about creating ways of increasing the capacity of an organisation to accommodate change, conflict and disturbance without failure (Hollnagel, Nemeth & Dekker, 2008, 2009).

5 Is safety best addressed at component or system level?

In OHS, it has been orthodox to think about safety as existing and failing at the component level of the individual. Through behavioural management and control of component performance, an impression of organisational or system-level safety is achieved. Engineers have tended to think about safety in a similar fashion; that is, a safe system can be constructed using components that do not break quickly, and by including extra or redundant components just in case (Perrow, 1984). To understand the potential for a failure or workplace accident, OHS professionals often make the following assumptions:

- The malfunctioning of one or more components of an organisation (including individuals) is a meaningful way to predict the potential for workplace accidents because the relationship between component behaviour and system behaviour is analytically non-problematic.
- If they put in more effort, people at all organisational levels can more reliably foresee and forestall bad outcomes.
- They should stay with the one best method for their jobs and be fully cognisant of the physical and other laws by which their system behaves (otherwise they should not be allowed to work in it).

Accidents such as that experienced by Deepwater Horizon contradict this orthodoxy (Elkind & Whitford, 2011; Levin, 2010). Monitoring the behaviour of individual components and measuring their properties will not enable prediction of workplace failure at the system level (Dekker, Cilliers & Hofmeyr, 2011). This is typical of complexity; safety is a so-called ‘emergent’ property that cannot be seen clearly at the level of the components that make up the system. The growth of complexity in organisations has outpaced our understanding of how complex systems work and fail. Our technologies have raced ahead of our theories. While organisations are able to build things such as deep-sea oilrigs with properties that can be understood reasonably well in isolation and measured, tabulated, counted and checked off against a regulation, in competitive regulated societies, connections proliferate, and interactions and interdependencies multiply. Given such complexity, the potential for both success and failure escalates (McLean & Elkind, 2004).

Workplace accidents rarely happen out of the blue. Generally, there is an incubation period, a time during which practices and assumptions about risk change slowly and gradually (Dekker, 2011b; Dekker et al., 2011; Pidgeon & O'Leary, 2000; Turner, 1978). A drift into failure may result from gradual normalisation of behaviour previously considered deviant due, perhaps, to borrowing from safety margins over time (Vaughan, 1996, 1999, 2005); from the ambiguities and uncertainties associated with many technologies; and from the effects of dealing with competitive pressure (Rasmussen, 1997). In this manner, workplace accidents can be the result of a slow, steady series of little steps (Dekker, 2011b; Woods, et al., 2010).

Deepwater Horizon is an example of an organisation that drifted into failure (Dekker, 2011b). While the organisation pursued success in a dynamic, complex environment with limited resources and multiple conflicting goals, a succession of small everyday decisions eventually produced a breakdown on a massive scale, killing 11 workers. The measurement of nominal OHS successes (a reduction in lost-time injuries) served to mask a state of teeming complexity and festering normality that gave rise to the accident. Restricting the concept of safety to a hunt for broken parts, fixable components or people not following rules or procedures results in a linear and componential view of safety problems.

Of course, the trajectory of drift is easy to see in hindsight; during day-to-day operations, it is much harder to identify. Individual and organisational practices continually adapt to new realities and accommodate new insights into how to do things better. This adaptive capacity, as noted in section 4, creates resilience and contributes to organisational health and survival. However, small, slow and steady adaptations can provide a route to either success or failure (Hollnagel, 2009; Vaughan, 1999). The way to deal with this conundrum is to understand the value of diversity in opinion, method and technique. Brittleness can result from, for example, listening to only one set of stakeholders, sticking with only one method for pursuing safety or pursuing a narrow set of goals (e.g. regulatory compliance). Organisational, operational, managerial and regulatory responses to risk may become outdated and irrelevant, as in the case of Deepwater Horizon. Things can look good on paper one day, but result in the death of 11 people the next.

6 Summary

The complexity of organisational life demands an understanding by OHS professionals that not all workplace safety is created in the same way. In some cases (which meet the conditions outlined at the end of section 3), compliance-based approaches may pay immediate dividends. In many cases they will not. It is important to understand why people work the way they do, even if they depart from the written guidance intended for that work. It is likely that those who work a job every day have a subtle but highly calibrated sense of what is risky and what is not, what works and what does not (Woods & Cook, 2002). This is not to imply that they are necessarily right, but it does speak to the value of diversity. Being open to fresh perspectives, and understanding and discussing the possibilities for creating safety in practice, will create the best foundation for safer workplaces.

7 References

AMA (American Medical Association). (1998). A Tale of Two Stories: Contrasting Views of Patient Safety. Chicago, IL: American Medical Association.

Argyris, C., & Schön, D. (1978). Organizational Learning: A Theory of Action Perspective. Reading, MA: Addison-Wesley.

Clarke, L., & Perrow, C. (1996). Prosaic organizational failure. American Behavioral Scientist, 39(8), 1040-1057.

Dekker, S. W. A. (2003). Failure to adapt or adaptations that fail: Contrasting models on procedures and safety. Applied Ergonomics, 34(3), 233-238.

Dekker, S. W. A. (2006). The Field Guide to Understanding Human Error. Aldershot, UK: Ashgate Publishing Co.

Dekker, S. W. A. (2007). Just Culture: Balancing Safety and Accountability. Aldershot, England: Ashgate Publishing Co.

Dekker, S. W. A. (2011a). The criminalization of human error in aviation and healthcare: A review. Safety Science, 49(2), 121-127.

Dekker, S. W. A. (2011b). Drift into Failure: From Hunting Broken Components to Understanding Complex Systems. Farnham, UK: Ashgate Publishing Co.

Dekker, S. W. A., Cilliers, P., & Hofmeyr, J. (2011). The complexity of failure: Implications of complexity theory for safety investigations. Safety Science, 49(6), 939-945.

Elkind, P., & Whitford, D. (2011, 24 January). BP: 'An accident waiting to happen'. Fortune.

Hollnagel, E. (2009). The ETTO Principle: Efficiency-Thoroughness Trade-Off. Why Things That Go Right Sometimes Go Wrong. Aldershot, UK: Ashgate Publishing Co.

Hollnagel, E., Nemeth, C. P., & Dekker, S. W. A. (2008). Resilience Engineering: Remaining Sensitive to the Possibility of Failure. Aldershot, UK: Ashgate Publishing Co.

Hollnagel, E., Nemeth, C. P., & Dekker, S. W. A. (2009). Resilience Engineering: Preparation and Restoration. Aldershot, UK: Ashgate Publishing Co.

Hollnagel, E., Woods, D. D., & Leveson, N. G. (2006). Resilience Engineering: Concepts and Precepts. Aldershot, UK: Ashgate Publishing Co.

Hopkins, A. (2010). Risk-management and rule-compliance: Decision-making in hazardous industries. Safety Science, 49(2), 110-120.

Huber, S., van Wijgerden, I., de Witt, A., & Dekker, S. W. A. (2009). Learning from organizational incidents: Resilience engineering for high-risk process environments. Process Safety Progress, 28(1), 90-95.

LaPorte, T. R., & Consolini, P. M. (1991). Working in practice but not in theory: Theoretical challenges of "High-Reliability Organizations." Journal of Public Administration Research and Theory: J-PART, 1(1), 19-48.

Levin, A. (2010, 9 September). BP blames rig explosion on series of failures. USA Today, p. 1.

McLean, B., & Elkind, P. (2004). The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. New York: Portfolio.

Perrow, C. (1984). Normal Accidents: Living with High-risk Technologies. New York: Basic Books.

Pidgeon, N., & O'Leary, M. (2000). Man-made disasters: Why technology and organizations (sometimes) fail. Safety Science, 34(1-3), 15-30.

Rasmussen, J. (1997). Risk management in a dynamic society: A modelling problem. Safety Science, 27(2-3), 183-213.

Rasmussen, J., & Batstone, R. (1989). Why Do Complex Organizational Systems Fail? Washington, DC: World Bank.

Reason, J. T. (1997). Managing the Risks of Organizational Accidents. Aldershot, UK: Ashgate Publishing Co.

Reason, J. T. (2008). The Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries. Farnham, UK: Ashgate Publishing Co.

Starbuck, W. H., & Farjoun, M. (2005). Organization at the Limit: Lessons from the Columbia Disaster. Malden, MA: Blackwell Publishing.

Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-machine Communication. New York: Cambridge University Press.

Turner, B. A. (1978). Man-made Disasters. London: Wykeham Publications.

Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago: University of Chicago Press.

Vaughan, D. (1999). The dark side of organizations: Mistake, misconduct, and disaster. Annual Review of Sociology, 25, 271-305.

Vaughan, D. (2005). System effects: On slippery slopes, repeating negative patterns, and learning from mistake? In W. H. Starbuck & M. Farjoun (Eds.), Organization at the Limit: Lessons from the Columbia Disaster (pp. 41-59). Malden, MA: Blackwell Publishing.

Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty (2nd ed.). San Francisco: Jossey-Bass.

Wildavsky, A. B. (1988). Searching for Safety. New Brunswick, NJ: Transaction Books.

Woods, D. D. (2003). Creating foresight: How resilience engineering can transform NASA’s approach to risky decision making. Washington, D. C.: US Senate Testimony for the Committee on Commerce, Science and Transportation, John McCain, chair.

Woods, D. D., & Cook, R. I. (2002). Nine steps to move forward from error. Cognition, Technology & Work, 4(2), 137-144.

Woods, D. D., Dekker, S. W. A., Cook, R. I., Johannesen, L. J., & Sarter, N. B. (2010). Behind Human Error. Aldershot, UK: Ashgate Publishing Co.

Wright, P. C., & McCarthy, J. (2003). Analysis of procedure following as concerned work. In E. Hollnagel (Ed.), Handbook of Cognitive Task Design (pp. 679-700). Mahwah, NJ: Lawrence Erlbaum Associates.