Operator Decision Making Before Starting and While Doing an Activity/Task: Development and Evaluation of Integrated Front-end Priming and Heuristic Risk Management Tools for Operators

Motivation – This paper describes ongoing PhD research to develop and evaluate practical tools for use by operators to manage and control their risks in a dynamic environment, before starting and while undertaking an activity/task. Research approach – Three different industries are examined: mining, railway and construction. In each, several comparable high-risk activities are examined; for example, working at heights. Findings/Design – Initial observations are currently taking place, before subsequent detailed data collection. Research limitations/Implications – The final risk management and decision making tools developed will likely differ in application depending on each company's maturity in risk management and the level of risk of the task/activity type. Originality/Value – This ongoing research explores the area of risk management and decision making, and is looking to marry together the appropriate common sets of systems and cognitive perspectives. Take away message – Practical tools, possibly computer-based, can be developed to help operators manage and control risks.


INTRODUCTION
In many high-risk industries, strategic and tactical risk management processes have been advocated and disseminated in a top-down approach from managers and supervisors, who are often the risk makers, to the operators managing risk at the coal-face, who are the risk takers (Orasanu, Fischer and Davison, 2002). This has resulted in an expectation, by managers and supervisors, that the systems thinking approach to risk management of uncertainties will be used by operators when making decisions, in an attempt to manage and control operational risks.
Johnston, Driskell and Salas (1997) contend that there are times, such as during a sudden and unexpected threat, when the systems thinking approach, or what they call 'vigilant decision making', is not sufficiently linked with the results of operational decision making, i.e. decision making using operator cognition, or what they call 'hypervigilant decision making'. This is also supported by Lipshitz (2006), who contends that naturalistic decision making (NDM) and organisational decision making (ODM) focus on habitats which do not interact.
In the managers' domain, an important issue at the time of conducting a formal risk assessment is that not all factors will be known or anticipated, and as a result the risk management information for strategic or tactical frameworks may have to be gleaned from subject matter experts or further research.
In the operators' domain, there may be fewer unknowns compared to managing at a strategic level, or supervising at a tactical level, and activities are generally rooted in ongoing processes (Frame, 2003), but often there is not the luxury of time for reality checks. Many of these operators are in trade-based occupations, used to a more traditional vocational and therefore practical approach to conducting business, rather than applying the art and science of management systems. Thus, the particular focus of this research is on operators who are generally in skill-based positions in a task environment, where much of the work is performed in a largely automated manner, or at least where little system disturbance usually occurs.
Whilst at the strategic and tactical end of the business the skills and resources, including time, are often available for the systems approach, it is not scalable to the sharp end of the business because operators often:
• have not been trained in, nor have, the systems thinking skills to apply to risk management;
• are not motivated to fill in the paperwork in a systems model, which is seen as a covering exercise;
• treat the paperwork as a 'tick and flick' exercise just to be able to move on and get on with the job, and
• have previous task experience, and reduced hazard perception; e.g. operators claim 'it's ok because I'm experienced in this task; I've done it a thousand times before without a negative consequence'.
The last dot point above may seem like a reasonable excuse, and it may be, because operators are generally applying rules-of-thumb, also known as heuristics, from the cognitive models they have built up over years of exposure to the same or similar problems. The problem arises when 'they don't know what they don't know' and there is no widely accepted process for conducting risk assessments on 'black swan events' (Taleb, 2007), i.e. those that are highly improbable but have a catastrophic outcome if they happen.
So whilst management expects a systems approach to risk management, in practice operators often take a 'cognitive' approach to risk management using heuristics. The literature review to date (presented below) has shown that systems thinking and cognitive thinking theories and practices tend to be mutually exclusive sets.
There needs to be a catalyst to help operators manage and control these risks at the coal-face. It is anticipated that the practical tools to be developed and implemented in this research will have an impact in that area.

BACKGROUND

Systems vs. Cognitive Approach/Thinking
The systems thinking approach to managing risk arose out of what is defined as 'The Safety Management Era' of the 1950s and 1960s (Petersen, 2001). With the rapid improvement in frequency and severity rates, the safety management era was seen as a success. The systems approach was further entrenched through what Petersen defines as the 'Occupational Health and Safety Act Era', in which, besides complying with statutory and regulatory requirements, documenting everything became the norm. In some companies this remains the status quo.
Through the introduction and general industry acceptance (in Australia) of AS 4360:2004 'Risk Management', risk management at the strategic and tactical levels of business seems to be well founded in a systematic approach. With strategic decisions being more a management function, and tactical decisions being more a supervisory function, this systems approach is perhaps appropriate.

Dynamic Assessment of Risk
However, in the field, operators are struggling to apply the principles of this systematic approach on a 'dynamic' basis. Dynamic is mentioned in the context of a work environment and activities that are changing on an almost constant basis (Orasanu et al, 2002), or what Lipshitz et al (undated) define as the continuum of uncertainty, which comes in three forms for operators:
• inadequate understanding of the task;
• incomplete information to meet the objectives, and
• undifferentiated alternatives from a skills, experience and competence basis (Lipshitz et al, undated).
The word 'dynamic' also conjures up the concept of the need for constant alertness by operators (Lipshitz et al, undated). However, we all know from our own experience that constant alertness is an impossible state to maintain for anything more than very brief periods. Managers and supervisors often expect operators to complete complex risk analysis paperwork before undertaking activities/tasks, without any understanding of the process and the reason why it is required. Thus the whole issue of what is trying to be achieved from a proactive systems thinking perspective is lost on operators. Often this approach requires paperwork in the form of a multi-sheet job safety analysis (JSA). The original and real, practical use for JSAs is when:
• a task has never been undertaken before and therefore no procedures exist;
• there have been significant changes (to the environment, materials, plant and equipment or people);
• the activity/task is not routine, or
• the activity/task is routine but has inherent high risks associated with it.
JSAs were originally designed as an aide-memoire at an operational level for guiding a cognitive process. Originally the JSAs were simple dot points to guide thinking on:
• What are we doing/trying to achieve?
• With what?
• What can go wrong?
• How can we stop it going wrong?
It is interesting to note that the original JSA process, which was an operational process, has become systemised (read: corrupted) into a complex systems thinking approach, and has become an attempt to transfer risk to supervisors and operators; legally, this does not stand up under the 'duty of care' model that many people in first world countries have to operate under.
It soon becomes clear on any work site that formal strategic/tactical risk management principles have their limitations when applied, for example, by the operator about to use a high-energy drill to put holes into the rock face, who has undertaken that activity/task hundreds, if not thousands, of times before. At a tactical level the pressures of time, cost and project uncertainty (Zsambok et al, 1992) can be transferred to operators, who also have to contend with task uncertainties and ambiguities.
A significant concern is that managers and supervisors get a 'warm and fuzzy' feeling that operators have filled in the JSA paperwork, without worrying about the quality of the process. In other words, the risk management process is being conducted in a passive manner (Robinson, 2006) and, as previously mentioned, in an attempt to transfer the risk down the organisational structure.
Whilst there is well-founded research in cognitive thinking theory, most notably in the past by Gigerenzer and Klein, and more recently by Lipshitz, Orasanu and Fischer, it appears that many companies are entrenched in the practice of completing JSAs for every activity/task and see this as a valid use of resources. Management and supervisors claim that risks cannot be in a managed and controlled state unless they have a piece of paper to say operators have followed a systematic process. However, more often than not, the process becomes a 'tick-and-flick' exercise by operators, because as some say 'we have a job to get on with and doing the paperwork impacts negatively on our production bonuses'.
Another issue to consider in the systems approach of undertaking JSAs is that it only works with explicit knowledge, i.e. knowledge you can explain. It does not translate tacit knowledge, i.e. knowledge that you cannot explain but use many times in conducting the simplest of tasks, into something that can be used (Nonaka and Toyama, 2003).
However, the positive side of the systems approach to JSAs is that there is some effort to formalise the decision making process and review it before something goes wrong, whereas in the majority of cases judgements about people tend to be made on the outcomes of things that went wrong.

Potential Risk Management Tools
There is potentially a better way to conduct operational risk management. This could be based on a number of models, varying with the maturity of an organisation's risk and safety culture, drawing from both systems and cognitive theories and practices, rather than using a scaled-down version of the traditional and formal risk management process.
It is anticipated that the application of a range of practical tools for risk management at the operational level, before starting and while doing an activity/task, whilst yet to be developed, implemented and evaluated, will depend on the individual and collective level of competence and risk-maturity at the workplace. From a proactive risk management approach, on the face of it, these operational risk management tools, which are expected to be based on 'rules-of-thumb' or heuristics, may appear adequate as standalone tools. These tools will generally be developed from the cognitive theory and practices set, but may also draw on elements of systems thinking theory.
It is also anticipated that the heuristic decision making tools will need to be supported by what our early research has labelled 'front-end priming tools'. These front-end priming tools put context around the risks dealt with on a day-to-day basis, and may be computer-based. Whilst undertaking activities/tasks, these front-end priming tools should allow cues to be generated that help operators recognise patterns, allowing them to make decisions and take appropriate action to influence the situation (Klein, 2004). These are essentially tools whereby every reasonably conceivable thing that can go wrong is identified, and operators are put into simulated situations to practise implementing controls to quickly recover or soften the consequence. These tools will generally be developed from the systems theory and practices set, but may also draw on elements of cognitive thinking theory, as discussed in the research questions/hypotheses section.
As discussed in further detail below, these tools will also assist in the development of credible scenarios that need to be managed whilst undertaking activities/tasks.
Lipshitz et al (undated) discuss the principle of 'plausible stories', and it is well recognised that at the strategic end of the business leaders create vision through this storytelling. However, these stories are often not translated well into managing risk at the operational end of the business. Somehow these stories need to be converted into a tactical understanding and then an operational understanding.
It may be argued that these priming tools already exist, and they do to a certain extent in the form of inductions, job-starts and toolbox meetings. However, more often than not, although lots of rhetoric is heard to the contrary, these tools become one-way information flows from supervisors delivering the information to operators who just want to get on with the job. The effective processing of the information by operators is rarely measured. The front-end priming tools should allow for two-way interaction and measure the depth and quality of operator processing of the information; hence they are likely to be computer-based.
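As a purely illustrative sketch, a computer-based priming exercise of the kind described above might present a 'what can go wrong' scenario, accept free-text responses, and score how many of the expected controls the operator actually recalled, as a crude proxy for depth of processing. The scenario content, class names and scoring rule below are assumptions for illustration, not part of the tools being developed in this research.

```python
# Minimal sketch of a two-way "front-end priming" exercise.
# All scenario text and scoring rules are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One 'what can go wrong' situation presented to an operator."""
    prompt: str
    expected_controls: set          # controls the operator should recall
    responses: list = field(default_factory=list)

    def record(self, answer: str) -> None:
        """Capture an operator's free-text answer (normalised)."""
        self.responses.append(answer.strip().lower())

    def score(self) -> float:
        """Fraction of expected controls named: a crude proxy for
        'depth of processing' of the briefing, rather than tick-and-flick."""
        named = {r for r in self.responses if r in self.expected_controls}
        return len(named) / len(self.expected_controls)

# Hypothetical working-at-heights scenario.
s = Scenario(
    prompt="A guardrail section is missing near the drill platform. "
           "What controls do you apply before starting?",
    expected_controls={"harness", "exclusion zone", "report hazard"},
)
s.record("Harness")
s.record("report hazard")
print(f"processing score: {s.score():.2f}")  # 2 of 3 controls named
```

Unlike a one-way briefing, a score below some threshold could trigger a repeat of the scenario or a supervisor conversation, making the measurement of operator processing explicit.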
There simply is not enough time in a world of uncertainty to make the optimal decision, the one where it can be proven that no better solution exists (Gigerenzer, 2007). As such, this research aims to develop, implement and evaluate front-end priming tools that are quick, simple and satisfying to use and, most importantly, ensure that individuals and businesses are 'primed' for most threats faced. These front-end priming tools will also support practical risk management decision making tools (heuristics), to be developed, implemented and evaluated such that they allow operators to make the appropriate 'good-enough' decision as opposed to the optimal decision.
Phillips et al (2004) discuss the fact that a systems thinking approach is good for rule-based tasks, but cannot approximate human judgement when it comes to highly complex cognitive tasks. And so it is hoped that this research will lead to the development of a combination of front-end priming (systems) and heuristic decision making (cognitive) tools that allow operators to manage and control risks in a dynamic environment, in line with meeting up-line tactical and strategic goals.
Based on earlier work by Dreyfus and Dreyfus (1980), a derived model to support the tools to be developed is depicted in Figure 1. The model depicts increasing use of heuristics (cognitive set) as skill levels increase and, at the same time, a decrease in reliance on front-end priming tools (systems set). This is supported by Gigerenzer's (2007) principle that intuition can be improved by replacing complex procedures, which are in danger of being misunderstood and circumvented, with simple and empirically informed heuristics.
It is contended that, through this adapted model, competence is more important than skill alone in assisting decision making for risk management through cognitive processes. Competence is a combination of skills and experience that can be applied in a practical application for an outcome that can be measured against a set standard. As such, a competent person should be able to describe, explain and predict/react to situations, or, as Lipshitz et al (2001) describe it, work with different levels of mental models. However, as previously discussed, it is recognised that at a cognitive level there is the issue of tacit information being used at the operational level.
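The trade-off in the adapted model can be sketched in code: reliance on front-end priming (systems set) falls, and reliance on heuristics (cognitive set) rises, as competence grows. The stage names follow Dreyfus and Dreyfus (1980), but the linear numeric weights are an assumption for illustration only; the actual model makes no claim about the exact shape of the curves.

```python
# Illustrative sketch of the Figure 1 idea: heuristic use grows, and
# front-end priming reliance shrinks, with competence. The linear
# weighting is an assumption, not part of the derived model itself.

DREYFUS_STAGES = ["novice", "advanced beginner", "competent",
                  "proficient", "expert"]

def tool_mix(stage: str) -> dict:
    """Return a rough weighting of priming vs heuristic tool use."""
    i = DREYFUS_STAGES.index(stage)            # 0 (novice) .. 4 (expert)
    heuristic = i / (len(DREYFUS_STAGES) - 1)  # grows with competence
    return {"front_end_priming": round(1.0 - heuristic, 2),
            "heuristics": round(heuristic, 2)}

for stage in DREYFUS_STAGES:
    print(stage, tool_mix(stage))
```

A novice would lean entirely on front-end priming, an expert almost entirely on heuristics, with the mid-stages blending the two; this is the sense in which the two tool sets are complementary rather than competing.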

Impacts on Risk Assessment
One of the biggest concerns in operational risk management is risk blindness (Blake, 2008). Risk blindness occurs when an event has not happened in a long time, or ever, or when the work environment changes over a long period of time. This relates to the previously discussed 'black swan' theory (Taleb, 2007). With the inclusion of the catastrophic outcome, it may begin to answer the question 'how can experts know so much yet predict so badly?' (Phillips et al, 2004). These events, or combinations of them, even with rational awareness of the risks, lead operators to become blind to the risk.
It is also anticipated that the use of front-end priming tools should shift the paradigm away from ignoring chronic consequences. Using the frog analogy: a frog dropped into boiling water reacts instantly, whereas a frog in slowly heated water does not. Operators should react to a situation that is unwanted, or starting to become unwanted, regardless of whether the outcome is chronic or acute.
Another concern in managing operational risk is the effect of individual biases based on cognitive factors, most commonly derived from poor memory or information (Breakwell, 2007). The front-end priming should make allowance for this weakness in operational risk management.
Recently, and in line with our theory on priming tools, cockpit and cabin operations have moved towards a 'collaborative consultation and communication' process rather than the status-based 'command and hint' process (Fischer and Orasanu, 2000). Perhaps there is room for the thinking-out-loud or 'brainstorming' approach used by commercial and combat pilot crews in other high-risk industries, i.e. a team-based cognition approach (Orasanu et al, 2002). This would enable the sharing of task-related risk-critical information for decision making (Fischer et al, 2007) and capitalise on the knowledge synergy of the team (Salas, 2000).
Unfortunately, the reality is that general business does not have the resources to conduct these priming exercises on the scale of the military or high-risk transport modes. Often these businesses rely on 'sheep-dip' training and the other 'one-way' priming tools previously discussed, which do not allow any contextual development of the issues/risks operators deal with in their workplace. In the case of managing risk, risk-critical information sharing has its highest impact when knowledge management is designed, implemented and managed such that it contributes to the collective cognition skill, rather than the individual skill base alone (Meso et al, 2002). This enables the maintenance of a set of risk-critical corporate knowledge.

This ongoing research investigates these issues; the remainder of this paper discusses the start of a PhD research journey and the planned actions to develop practical tools suitable for use by operators at the sharp end of the business to manage and control their risks in a dynamic environment. These tools will take advantage of a combination of the appropriate aspects of systems thinking and cognitive theories and practices.

RESEARCH QUESTIONS/HYPOTHESES
The systems approach to formal risk assessments and the cognitive approach to managing operational risks appear to be two mutually exclusive domains. To our knowledge, no previous research has explored the what, how, where, when and why of this issue to derive practical and complementary tools for risk assessments before starting and whilst undertaking activities/tasks. There may be a natural common set in which the appropriate elements of each domain of theory and practice can be applied to give a swift and practical decision making process for managing and controlling risk in the field, as shown in Figure 2. It is anticipated that the PhD research will give credence to such a simple model (or models) by demonstrating that the use of practical tools will lead to outcomes well above chance.

(Figure 2 depicts the common set drawn from systems theory and practice and cognitive theory and practice for the development of front-end primer and heuristic tools.)

SCOPE OF THE RESEARCH
It is not anticipated that any laboratory research will be conducted in this study, due to its limited utility (Klein and Calderwood, 1991). It is anticipated that field studies of people in the real world, working under the pressures of schedule and cost, uncertainties and ambiguities, will yield the appropriate validation of the academic theories and models developed.

Activities
There are a number of high-risk activities that roll off the tongues of experienced risk management practitioners. However, for this research the intention was to find a number of activities with commonalities across the chosen industry research partners, to allow 'apples-for-apples' comparisons. At the same time, it is hoped these activities will reveal some interesting cross-industry improvement opportunities arising from nuances in the risks associated with those industries/activities.
Three high-risk activities were initially judged to be sufficient to give enough research data to support or disprove the hypotheses. However, this does not preclude adding further activities to the research later.
The two high-risk activities that stood out above all others, in terms of both high incident rates/high potential consequence and allowing meaningful comparisons across industries, were:
• working at heights, and
• simultaneous operations (i.e. the interaction of people and plant/equipment in close environs).
These high-risk activities were considered suitable because, from an operational risk point of view, they should put operators in a constant state of flux, i.e. managing the expected, while being aware of and ready to react to the unexpected. This is what Weick and Sutcliffe (2001) call the cycle between action and interpretation; or simply, being focused on and aware of your environment.
At the initial meeting held with each of the industry research partners, the research team presented its initial opinion (based on safety data, supported by professional opinion) of the two high-risk activities that should be included in the research, namely working at heights and simultaneous operations. Interestingly, these were agreed to by the industry research partners, who said that they would have picked the same high-risk activities, thus giving additional convergent validity to the research team's draft choices.
The third high-risk activity was left unannounced, with a tactic of trying to derive it specifically from a study of the industry research partners' incident and accident statistics and risk registers. The danger of this strategy is that the research may be hijacked into some hobby-horse issue of a particular manager, or even expanded to look at a further three high-risk activities instead of one. As the research progresses, this will need to be carefully managed to ensure there is no scope/expectation creep, which could blow out the field research effort and make the PhD research topic too sprawling and diffuse.

Industries Chosen
Three industries were judged to be sufficient to produce enough data to support or disprove the hypotheses, with one company in each industry group.More industries or companies can, of course, be added later.
At the request of the companies involved in the research, and in order to preserve their anonymity, only industries have been nominated, not the actual company names.
The chosen industry research partners had to meet a number of criteria:
• regularly conduct well recognised high-risk activities;
• be in an industry that allowed established research links to be built upon, and
• be locally based to allow for economies of scale in site visit logistics.
As such, the three companies came from the mining industry, the railway industry (and in particular one that conducted significant maintenance activities) and the construction industry.

METHOD
As this PhD research is still in its early stages, the methodology is still provisional. The likely stages are:
• identify draft high-risk activities and target likely industry partners;
• meet with the companies to discuss the PhD conceptually and explain the likely benefits for them;
• upon general agreement, finalise the scope of research with each company; this includes developing a memorandum of understanding (MOU) between the research team and the company that sets out each party's understanding of the process, timeframes and expected outcomes;
• hold follow-up meetings to discuss the MOU and set logistics for getting company-specific access to incident and accident data and risk registers, and nominating times for site visits;
• conduct soft visits in the first round of outings, i.e. more passive task observations, trust building and establishing personal contact with operators and managers;
• conduct subsequent visits with:
o a review of the operational risk management tools in current use before undertaking activities/tasks;
o further observations to better quantify issues (e.g. the number of critical interactions between pedestrians and mobile equipment);
o operator interviews (e.g. using the Critical Decision Method) to further probe critical decisions made at the operational level whilst undertaking activities/tasks;
• run manager and supervisor focus groups and surveys on why and how risk management decisions were made;
• distil key results, including analysis of culture;
• develop the operational risk management tools, i.e. the front-end priming tools and decision making tools;
• test and fine-tune the operational risk management tools, and
• document research findings and conclusions.

INDICATIVE RESULTS
At the time of writing, the initial observations are being analysed. Provisional results of these observations may be presented together with a more comprehensive draft work plan for the remainder of the research.

Likely Findings
Without overly pre-judging the results, based on the literature reviewed, the initial observations and the researcher's general familiarity with the issues, the likely findings are that operational risk management is not being well managed because:
• for most high-risk industries, whilst risk management at higher levels (strategic and tactical) is well founded within a systems thinking framework that appears to deliver results appropriate to the nature and scope of the risk and the level of effort required, when treated as a scalable process it does not translate well to operational risk management;
• risk management at the operational level is something that line managers expect operators at the sharp end of the business to perform, often with very little formal training in the context of the risks to be managed and the methodology to mitigate those risks, although post-incident, operators are perceived by managers and supervisors as experts in those specific tasks (Hoffman et al, 2002);
• for the three industries chosen for this research, operators at the sharp end of the business are over-burdened with systems thinking paperwork that often adds little value to operational risk management;
• generally the focus is on the paperwork and not the plan, whereas it should be the other way around, and should include allowance for debriefing and sharing lessons learned in a safe environment (Neta et al, 2009);
• there is no focus on measuring and rewarding the process, only the hunt for guilty parties and punishment as a result of adverse outcomes, and
• it is the contention of this research that there is a better way to conduct operational risk management, based on a number of models varying with the maturity of an organisation's risk and safety culture, using both systems and cognitive theories and practices, rather than a scaled-down version of traditional risk management.

CONCLUSION
As this is ongoing PhD research in its early phase, no hard conclusions can be drawn yet. However, it is important research that is looking to find common sets of systems and cognitive perspectives to help operators at the sharp end of the business to manage risks in a dynamic environment. It is envisioned that the likely tools emerging will be:
• primer tools for risk awareness, possibly computer-based;
• heuristics for managing risk in a dynamic workplace, and
• reward tools for correct/effective process or behaviour, to reinforce management and supervisors' expectations.
Although operational risks may have fewer unknowns than strategic and tactical risks, and operational risks are generally rooted in ongoing processes, there will be times when, due to the dynamic workplace, things become unfamiliar and the decision maker is faced with uncertainty and ambiguity. This uncertainty may come in the form of missing or invalid data, ambiguity due to competing issues, and complexity that interferes with sense-making (Crandall et al, 2006; Lipshitz et al, undated). The front-end priming and supporting decision making tools need to assist operators at the sharp end of the business to deal with these issues/risks. With a successful and harmonious implementation of these common sets, the likely result will be a reduction in unwanted events causing harm to people and the environment or damage to property. In particular, a focus will be put on high-risk activities to test the tools, with particular interest in low-likelihood but high-consequence events, sometimes known as 'black swans' (Taleb, 2007).

Figure 1 - Adapted Five-Stage Model of the Primers and Heuristics Involved at Different Competence Levels

Figure 2 - Common Set From Systems and Cognitive Theories and Practices to Apply in Front-end Priming and Heuristic Decision Making Tools