Wednesday, December 30, 2015
Turn around don't drown video
Managing risk requires understanding the risk; the National Weather Service is doing its part with this video about the risk of driving across a flooded road:
Monday, December 21, 2015
Is a Self-Driving Car Safe?
Matt McFarland's article about the safety of robots, drones, and self-driving cars highlights the need for testing of all types: software tests, hardware tests, simulations, and operations on test tracks and real roads.
As the article mentions, the big question: how can one know if the robotic system is safe?
Safety must be relative to an acceptable risk threshold, and setting that threshold will be an important conversation. Testing will have to show that the likelihood of an accident is sufficiently low and that the damage, if an accident occurs, is acceptably low.
In their draft requirements, the California Department of Motor Vehicles suggested that a third-party testing organization should verify that the vehicle is safe.
Links:
https://www.washingtonpost.com/news/innovations/wp/2015/12/18/the-billion-dollar-robot-question-how-can-we-make-sure-theyre-safe/
http://www.dmv.ca.gov/portal/dmv/detail/pubs/newsrel/newsrel15/2015_63
Monday, November 30, 2015
How to plan a test
Testing generates information that can be used to make a decision. Testing can occur at any stage in the development of a product or system; it can test a specific attribute or overall performance; it can test a component, a subsystem, a system, or a system-of-systems.
Planning a test requires making crucial decisions: Which item to test? Which test to perform? How many tests to perform?
The test plan determines the value of the information gathered and the time and cost of testing. In general, the key tradeoff is that gathering more valuable information requires more time and cost.
My students and I have developed some techniques for making test planning decisions.
Which facility to use? In some cases, a system (such as a military vehicle) needs to be used in an operational environment, but there are no existing test facilities like that environment. Instead of building a new test facility, one could use a combination of existing facilities to replicate the new operational environment. We developed an optimization model to find the test plan that used the best combination of existing facilities. For military vehicle applications, it specified the time and number of miles that the test vehicles should run on each existing test facility. Reference: http://www.isr.umd.edu/~jwh2/papers/IEST.pdf
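Here is a minimal sketch of that facility-mix idea (not the model from the paper, and with made-up facility data): choose the miles to run at each existing facility so that the combined terrain exposure meets the operational profile at minimum cost.

# Hypothetical facility-mix sketch: allocate test miles across existing
# facilities to meet operational exposure targets at minimum cost.
import numpy as np
from scipy.optimize import linprog

# Rows: terrain types (paved, gravel, cross-country); columns: facilities A, B, C.
# Each entry is the fraction of a mile at that facility counting toward that terrain type.
exposure = np.array([[0.7, 0.2, 0.0],
                     [0.3, 0.5, 0.4],
                     [0.0, 0.3, 0.6]])
required_miles = np.array([400.0, 300.0, 200.0])  # operational profile to replicate
cost_per_mile = np.array([5.0, 4.0, 7.0])         # cost per mile at each facility

# Minimize cost subject to exposure @ x >= required_miles (linprog uses <=, so negate).
plan = linprog(c=cost_per_mile, A_ub=-exposure, b_ub=-required_miles, bounds=[(0, None)] * 3)
print("miles per facility:", plan.x.round(1), " total cost:", round(plan.fun, 2))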
Which configuration to test? A system-of-systems (SoS) consists of relatively independent systems. For example, a missile defense system has control stations, radar locations, and rocket launchers. If the reliability of the SoS is unknown because the reliability of the systems is unknown, then testing is needed, but testing a full-scale configuration is expensive. We developed a simulation technique to predict the results of tests with smaller configurations under different scenarios and estimate the expected error of the tests. With this information, the test planning decision-makers could evaluate the tradeoffs of cost and expected error and determine the best test configuration. Reference: http://www.isr.umd.edu/~jwh2/papers/Tamburello-Herrmann-JRR-2015.pdf
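The same idea can be illustrated with a toy simulation (my own illustration with assumed reliabilities, not the method from the paper): estimate each system's reliability from a limited number of pass/fail trials, predict the full-scale SoS reliability, and average the prediction error over many replications.

# Hypothetical sketch: expected error of predicting full-scale SoS reliability
# from a limited number of pass/fail trials on each constituent system.
import numpy as np

rng = np.random.default_rng(0)
true_rel = {"control": 0.95, "radar": 0.90, "launcher": 0.85}  # assumed values
full_config = {"control": 1, "radar": 2, "launcher": 4}        # copies in the full SoS

def sos_reliability(rel, config):
    # The SoS works if at least one copy of every system type works (series of parallels).
    return np.prod([1.0 - (1.0 - rel[s]) ** n for s, n in config.items()])

true_sos = sos_reliability(true_rel, full_config)

def expected_error(trials_per_system, replications=5000):
    errors = []
    for _ in range(replications):
        est = {s: rng.binomial(trials_per_system, p) / trials_per_system
               for s, p in true_rel.items()}
        errors.append(abs(sos_reliability(est, full_config) - true_sos))
    return np.mean(errors)

for n in (5, 10, 20):
    print(f"{n} trials per system -> expected error {expected_error(n):.3f}")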
Which attribute to measure (test)? In a multiattribute decision making situation, measurements of the attributes are valuable for knowing which alternative is best. If these measurements have error and the total budget for measurements is limited, then it is crucial to measure the attributes in a way that provides the most valuable information and increases the likelihood of selecting the truly best alternative. We developed and tested rules for determining which attributes should be measured how many times and showed that better rules can significantly increase this likelihood. Reference: http://www.isr.umd.edu/~jwh2/papers/Leber-Herrmann-ISERC-2015.pdf
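A small simulation (illustrative only; the rules in the paper are more sophisticated) shows the effect: with a fixed measurement budget, putting more of the noisy measurements on the heavily weighted attributes raises the chance of picking the truly best alternative.

# Hypothetical sketch: probability of selecting the truly best alternative under
# different allocations of a fixed measurement budget across attributes.
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([0.6, 0.3, 0.1])            # attribute weights (assumed)
true_values = rng.uniform(0, 1, size=(5, 3))   # 5 alternatives x 3 attributes
true_best = np.argmax(true_values @ weights)
noise_sd = 0.3                                 # measurement error (assumed)

def prob_correct(allocation, replications=20000):
    hits = 0
    for _ in range(replications):
        # The mean of n measurements of an attribute has standard deviation noise_sd / sqrt(n).
        sd = noise_sd / np.sqrt(allocation)
        measured = true_values + rng.normal(0.0, sd, size=true_values.shape)
        hits += np.argmax(measured @ weights) == true_best
    return hits / replications

print("equal split (4, 4, 4)   :", prob_correct(np.array([4, 4, 4])))
print("weighted split (7, 3, 2):", prob_correct(np.array([7, 3, 2])))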
Which test to perform? Demonstrating the reliability of a complicated system (such as a liquid rocket engine) requires testing the system and its components and subsystems. These tests and the associated hardware are expensive and require time at scarce test facilities. We developed a multi-objective test plan optimization approach to determine the best test plan that meets the demonstrated reliability. Reference: http://www.isr.umd.edu/~jwh2/papers/Strunz-Herrmann-CEAS-Space-2011.pdf
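The paper's multi-objective optimization is beyond the scope of a blog post, but a classic back-of-envelope formula shows why demonstration testing gets expensive: with zero allowed failures, demonstrating reliability R at confidence level C requires n = ln(1 - C) / ln(R) successful tests.

# Zero-failure reliability demonstration (classic binomial result):
# number of successful tests needed to show reliability R at confidence C.
import math

def zero_failure_tests(reliability, confidence):
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(zero_failure_tests(0.99, 0.90))   # 230 tests
print(zero_failure_tests(0.999, 0.90))  # 2302 tests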
Saturday, November 21, 2015
How to React to an Earthquake
One cannot prevent an earthquake, but one can protect people and property from its impact. Traditionally, this means designing and building structures that will survive the earthquake without damage (preventive actions). Now, because of earthquake warning systems, contingency planning is becoming feasible.
Earlier this month, The Washington Post's Eric Niiler wrote about technology that can tell that an earthquake is about to happen. The November 3 article ("Last-minute warnings may make quakes less destructive") states that a combination of ground sensors and satellite-based instruments can give a warning a minute or two before the earthquake occurs.
The technology works by detecting the P-waves that precede the earthquake's shock (the sound waves are an earthquake precursor). The ShakeAlert earthquake early warning system for the west coast of the United States was developed by the United States Geological Survey (USGS) and university partners. According to Niiler, Chile, Mexico, China, Taiwan, Turkey, and Israel are developing similar systems.
Although a minute is not enough time to evacuate large buildings or a city, it is enough to shut down (or put into a safe mode) critical systems such as trains, gas lines, elevators, tunnels, and bridges. Unfortunately, it is not enough time to send text messages to millions of people simultaneously, so earthquake alerts go only to emergency agencies, utilities, and similar high-priority organizations.
For more about ShakeAlert, see the fact sheet at http://pubs.usgs.gov/fs/2014/3083/pdf/fs2014-3083.pdf
Tuesday, November 3, 2015
Small pilots, big risks
On October 24, 2015, The Washington Post reported that testing of the F-35 Joint Strike Fighter has shown that the ejection seat system poses a risk of whiplash to pilots. In particular, pilots who weigh less than 136 pounds face a "high" risk of danger. "Mid-weight" pilots face a "serious" risk.
The Post also reported that the pilot's helmet is too heavy, which increases the risk.
Until the risk can be mitigated, the F-35 program has restricted "lighter-weight" pilots from flying the plane, which will be used by the U.S. Air Force, Navy, and Marine Corps, the Royal Air Force, and other U.S. allies. The 34th Fighter Squadron at Hill Air Force Base was the first operational Air Force unit to fly combat-coded F-35s.
Of course, in normal operation, a pilot does not eject; using the ejection seat is a contingency plan for worst-case scenarios. But the story highlights the need to consider the potential problems (risks) that can occur during a contingency plan.
It also shows different levels of risk acceptance: in 2011, the Operational Test and Evaluation office had "serious concerns" about conducting training flights with the ejection seat, but the F-35 program office accepted the risk and went ahead with training.
Tuesday, September 29, 2015
Phones, Boats, Airlines, and Commitment
The Business section of The Washington Post on Sunday, September 27, had four interesting articles about decision making.
Brian Fung discussed the new iPhone and the different ways to pay for it, including payment plans, leases, and traditional contracts (https://www.washingtonpost.com/news/the-switch/wp/2015/09/14/thinking-about-buying-a-new-iphone-read-this-first/). He identified the best plans for four different types of consumers: bargain hunters, early adopters, owners, and traditionalists.
Barry Ritholtz's column on investing (http://www.washingtonpost.com/business/get-there/never-buy-a-boat-and-other-rash-financial-advice/2015/09/25/b788db36-60a7-11e5-8e9e-dce8a2a2a679_story.html) covered financial advice (like "never buy a boat") and then discussed the importance of recognizing four key issues in any big decision: knowledge, costs, skill, and psychology. He proposed the following decision-making approach: (1) understand what you are getting into, (2) carefully consider all of the costs of ownership, (3) only buy what you can afford, and (4) make an intelligent decision about using your limited time and money.
Chico Harlan discussed the American Airlines-US Airways merger and how the combined firm is working hard to manage the risks associated with integrating their operations (http://www.washingtonpost.com/business/the-last-days-of-us-airways/2015/09/25/f5530686-60a6-11e5-8e9e-dce8a2a2a679_story.html). A key decision was retaining the larger reservations system used by American Airlines, which reduced the number of employees who had to learn a new system.
Finally, Michelle Singletary (http://www.washingtonpost.com/business/get-there/the-pope-has-left-dc-but-his-words-remain-we-must-help-the-poor/2015/09/25/c7b423c6-60a7-11e5-8e9e-dce8a2a2a679_story.html) wondered if the Americans who were inspired by Pope Francis will decide to contribute their time and talents to help the less fortunate and discussed the relevant principles.
Wednesday, September 23, 2015
Traveling in Pennsylvania
Last weekend we used the Pennsylvania Turnpike for our drive to the Pittsburgh area. Because we were headed away from Philadelphia, we were surprised to see roadside emergency signs "warning" us about the visit by Pope Francis. At the Somerset service plaza, we found cards on every table reinforcing the message (see photo). The back of the card had advice about safe driving that would normally be relevant to a severe winter storm.
The Turnpike's website also had this "highway advisory" (the original was in ALL CAPS):
To all travelers on the Pennsylvania turnpike Interstate 76, Interstate 276, and Interstate 476. Motorists are advised of the upcoming Pope visit in Philadelphia on September 26 and September 27, 2015. This event is expected to cause significant congestion around the Philadelphia region from the Reading interchange to the Delaware River bridge, and from the Pocono interchange to the mid-county interchange. For travelers not destined for Philadelphia during this time, find an alternate route.
Although the Pope poses no threat, the traffic surrounding his visit will be disruptive, so the Pennsylvania Turnpike staff prepared a standard risk communication message that describes the potential problem, its extent (time and place), and what one can do to mitigate it.
Wednesday, September 16, 2015
Making Laundry Detergent Safer
The Wall Street Journal reported that American manufacturers of laundry detergent packets (like Tide Pods) have agreed to a voluntary standard for the packaging of these packets.
According to the American Association of Poison Control Centers, in the United States, 7,184 children (age 5 and younger) were exposed to single-load laundry packets in the first seven months of 2015 (http://www.aapcc.org/alerts/laundry-detergent-packets/).
The WSJ article described the changes that the manufacturers will make (including tougher packaging and opaque containers).
It also quoted Nancy Cowles, the executive director of Kids in Danger, a nonprofit organization dedicated to protecting children by improving children’s product safety (http://www.kidsindanger.org/about-us/).
She raised an interesting question about risk monitoring: "We are talking absolute numbers-that's what we want to see a drop in. What's important is how many children are being injured, and not the rate of injuries relative to how much companies are selling."
What is the right measure of risk in this case: the number of children injured by laundry detergent packets, the number of injured children per household that uses laundry detergent packets, or the number of injured children per million laundry detergent packets sold?
In other areas, risks are measured per unit of activity: for instance, the European Aviation Safety Agency (EASA) 2013 Annual Safety Review reported both the number of fatal accidents per year and the rate per 10 million flights. Both numbers have been decreasing since 1994, but the rate varies dramatically around the world.
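A quick calculation with made-up numbers shows why the choice of measure matters: the absolute count can rise even while the rate falls.

# Hypothetical numbers: the injury count rises while the rate per million packets falls.
injuries = {"2014": 600, "2015": 700}
packets_sold_millions = {"2014": 500, "2015": 900}
for year in injuries:
    rate = injuries[year] / packets_sold_millions[year]
    print(f"{year}: {injuries[year]} injuries, {rate:.2f} per million packets sold")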
For the article, see http://www.wsj.com/articles/p-g-other-laundry-pod-makers-agree-to-new-safety-standard-1441397456?mod=rss_Business
Saturday, September 5, 2015
How to build better models with end-user modeling
Many decision-makers rely upon analysts to build models for them, engaging in a back-and-forth as the analyst evaluates and ranks alternatives using a mathematical model (such as decision analysis or optimization). This iteration (which John Little called decision calculus) can waste a lot of time.
In some situations, a better option is end-user modeling, in which the decision-maker builds the model. The approach is quantitative but not analytical, the style is quick and dirty, and the purpose is to gain insight into a decision or problem. (Tom Grossman and Stephen Powell introduced this style of modeling, which exploits the power of spreadsheets. Powell and Baker's textbook describes spreadsheet modeling in detail.)
The end-user modeling process, which accelerates the process of using models to understand a situation, has three steps: (1) plan the model (with the computer off), (2) program the model, and (3) craft the user interface.
For the planning stage, start by identifying the key relationships between the inputs and outputs, sketching the layout of a worksheet to calculate the outputs, and drawing the key charts and graphs that will provide the needed insight.
When programming, build the spreadsheet one section at a time, checking that each section works correctly, and using good spreadsheet programming techniques.
For the user interface, use color, formatting, and comments to make it clear how to use the model (one can quickly forget after setting it down for a while). Clearly identifying the inputs, parameters, calculations, and outputs is very helpful.
As Grossman stated, end-user modeling gives one "the ability to roughly compute the effects of a proposed change ('what-if' modeling)" and "the ability to obtain quick, rough insight on actions that are likely to improve the business."
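For readers who prefer code to spreadsheets, the same quick-and-dirty style translates directly; here is a hypothetical break-even model (my example, not one from the references): inputs and parameters at the top, the calculation in one function, and a what-if sweep in place of a chart.

# Hypothetical end-user model: quick-and-dirty break-even analysis.
unit_price = 25.0      # input: the what-if variable
unit_cost = 14.0       # parameter
fixed_cost = 40000.0   # parameter

def profit(units, price=unit_price):
    return units * (price - unit_cost) - fixed_cost

# What-if sweep: how does profit respond to volume?
for units in range(2000, 8001, 2000):
    print(f"{units} units -> profit {profit(units):,.0f}")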
For further reading:
Grossman, Tom, "End-User Modeling," OR/MS Today, October 1997. Link: http://lionhrtpub.com/orms/orms-10-97/IiE.html
Little, John D.C., “Models and managers: the concept of a decision calculus,” Management Science, Volume 16, Number 8, pages B466-B485, 1970. Link: http://pubsonline.informs.org/doi/abs/10.1287/mnsc.1040.0267
Powell, Stephen G. "The teachers' forum: From intelligent consumer to active modeler, two MBA success stories." Interfaces 27, no. 3 (1997): 88-98. Link: http://pubsonline.informs.org/doi/abs/10.1287/inte.27.3.88
Powell, Stephen G., and Kenneth R. Baker, The Art of Modeling with Spreadsheets, John Wiley & Sons, Inc., Hoboken, New Jersey, 2004. Link: http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP002883.html
Saturday, August 29, 2015
Accepting the Risk of a Derailment
According to an article in The Wall Street Journal, based on documents from the National Transportation Safety Board investigation into the 2014 crude oil train derailment in Lynchburg, Virginia, CSX Corp. knew of a flaw in the section of track where the derailment occurred.
On April 29, 2014, a track inspection had revealed the flaw, and CSX decided to replace a 40-foot piece of track on May 1. The accident, which caused an estimated $1 million in damage, occurred on Wednesday, April 30.
The track inspection indicates that CSX was monitoring the risk of a derailment, and their decision to replace the track segment (given the inspection result) shows that they were reacting to the increased risk (indicated by the precursor: the internal flaw).
Until the NTSB releases its final report, we can propose scenarios that illustrate the difficulty of risk management: The decision to continue using that line before the replacement was done suggests that someone at CSX was willing to accept the derailment risk. Risk mitigation has costs, and greater risk aversion costs more. Perhaps the cost of closing that line (with the consequent disruption to shipping and revenue) for two days was too large.
Another possibility is that those who detected the flaw and scheduled the track replacement failed to communicate the increased risk to those responsible for the operations on that track.
Monday, August 24, 2015
Deciding to Save New Orleans
Ten years after Hurricane Katrina, the city of New Orleans has done much to mitigate the risk of a hurricane. The August 22 issue of The Washington Post included an article by Chris Mooney (http://www.washingtonpost.com/sf/national/2015/08/21/the-next-big-one/) about an important decision that remains to be made: whether to use sediment diversion to protect the wetlands that protect New Orleans. In addition to slowing the loss of wetlands, the advantages include a relatively low one-time cost and a potential economic value from sportsmen and tourists who would enjoy the wetlands. The key disadvantage is the disruption to the local fishing industry. The key uncertainties are whether the diversions will actually work (many factors influence wetland restoration) and whether the wetlands will meaningfully reduce the storm surge.
The decision-making process appears to be an analytic-deliberative one: a state advisory board has scientific experts, while fishermen have organized a group to oppose the diversions and (if necessary) block construction, and a state agency needs to make a decision before the end of the year.
Saturday, August 22, 2015
Mitigating the Risk of Equipment Maintenance
Earlier this month I led a course on engineering risk management for a group of engineers and managers at a manufacturing firm that does sheet metal work and makes a variety of air distribution systems and components. They have numerous machines that use multiple sources of power, which makes equipment maintenance more challenging. They use lockout and tagout (LOTO) procedures (https://www.osha.gov/SLTC/controlhazardousenergy/index.html) but were interested in a systematic procedure for managing the risk associated with equipment maintenance. While covering the process of risk management, the associated activities, and the fundamentals of decision making, we discussed how they could apply these steps to make their equipment maintenance operations safer. The discussion included the potential problems of their current lockout procedures.
The bottom line: establishing and documenting lockout and tagout (LOTO) procedures are useful steps, but they don't replace a systematic risk management process that assesses, analyzes, evaluates, mitigates, and monitors the risks of equipment maintenance. Look for the potential problems, identify the root causes, put in place safeguards that prevent them, and have contingency plans that can react promptly to keep a problem from getting worse.
P.S. I would like to thank the IIE Training Center (http://www.iienet2.org/IIETrainingCenter/Default.aspx) for the opportunity to lead this course. Please contact them if you're interested in a short course on engineering decision making and risk management.
Tuesday, July 14, 2015
When New Horizons Halted
As the New Horizons spacecraft flies by Pluto today, it is collecting and sending back to Earth data from its many sensors. But this success almost didn't happen.
On Saturday, July 11, The Washington Post had an article about the crisis that occurred just a week earlier (July 4).
The story illustrates a couple of key ideas in decision making and risk management.
First, the loss of contact occurred because the spacecraft was programmed with a contingency plan: if something goes wrong, then go to safe mode: switch to the backup computer, turn off the main computer and other instruments, start a controlled spin to make navigation easier, and start transmitting on another frequency. A contingency plan is a great way to manage risk.
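The pattern itself is simple enough to sketch (a toy illustration of the contingency-plan idea, not New Horizons' flight software; the spacecraft interface below is invented):

# Toy sketch of a pre-programmed contingency plan: on any fault, enter safe mode.
def enter_safe_mode(spacecraft):
    spacecraft.switch_to_backup_computer()
    spacecraft.power_down_main_computer_and_instruments()
    spacecraft.start_controlled_spin()        # makes navigation from the ground easier
    spacecraft.transmit_on_backup_frequency()

def run_command_sequence(spacecraft, commands):
    try:
        for command in commands:
            command()
    except Exception:
        enter_safe_mode(spacecraft)           # don't try to recover onboard; wait for the ground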
Second, fixing this situation required the New Horizons operations team to manage an "issue," not a "risk," because the problem had already occurred (it was not a potential problem).
Finally, after diagnosing the problem and re-establishing contact with the spacecraft, the team had to make a key decision: whether to stick with the backup computer or switch back to the main computer (which had become overloaded, causing the crisis). Here, they displayed some risk aversion (not surprising considering the one-shot chance to observe Pluto): they went back to the main computer because they "trusted [it], knew its quirks, had tested it repeatedly."
Congratulations to all of the engineers, scientists, and technicians who designed, built, and operate the New Horizons spacecraft!
Saturday, July 4, 2015
Managing the Risk of Fireworks
The Fourth of July is a great opportunity to talk about risk management. Setting off fireworks at home is a popular entertainment, but it is dangerous, as the press reminds us every year: http://www.washingtonpost.com/blogs/govbeat/wp/2015/07/03/here-are-photos-of-all-the-horrific-ways-fireworks-can-maim-or-kill-you/
After assessing the risk, how can one mitigate it? Here are the basic approaches: (1) avoiding the risk by abandoning the planned action or eliminating the root cause or the consequences, (2) reducing the likelihood of the root cause or decreasing its consequences by modifying the planned action or performing preventive measures, (3) transferring the risk to another organization, or (4) assuming (accepting) the risk without mitigating it.
How would these apply to fireworks at home?
1. Avoid the risk: don't do it. Go to a fireworks show or watch one on TV or find something else to do.
2. Reduce the risk: stick to sparklers and party poppers and follow safety guidelines (like these from http://www.cpsc.gov/safety-education/safety-education-centers/fireworks/): keep fireworks away from brush and other substances that can burn, don't let children play with fireworks, keep a bucket of water handy to douse the fireworks or anything that catches fire.
3. Transfer the risk: hire a professional (or other trained expert) to do a fireworks show at your place, or let a neighbor run the show while you and your family watch from a safe distance.
4. Accept the risk: indulge in the tradition!
The relative desirability of these options depends upon how much you like fireworks and how much risk you're willing to accept.
Have a Happy Fourth of July!
Saturday, June 27, 2015
Will it work? The Failure (and Success) of the Little Joe II
When designing a brand-new product or system, an engineer asks (and is asked): "Will it work?" The performance of the design is a major source of uncertainty, and because mission or commercial success depends upon the answer to this fundamental question, information about its performance is valuable. Testing a prototype is a common way to get this information. If the prototype works, then there is less uncertainty, which makes accepting the design desirable.
During the development of the Apollo spaceflight program, the engineers at NASA and its contractors designed and conducted numerous tests of the many components and systems, especially those critical to the safety of the astronauts. To test the Launch Escape System (LES), the Convair Division of General Dynamics constructed an 86-foot-tall rocket named "Little Joe II" and used it to conduct tests at the White Sands Test Facility in New Mexico between 1964 and 1966.
During test A-003 in May 1965, the Little Joe II rocket broke apart unexpectedly, triggering the LES, which worked exactly as designed: it took the command module (an unmanned boilerplate version) away from the rocket. After reaching 19,000 feet, the LES and command module separated, and the command module's parachute system lowered it to the ground. This surprise was extremely valuable because it demonstrated that the LES would work, even under extreme conditions. Although the rocket failed, the test was a success.
For more information, see this section from a NASA publication: http://history.nasa.gov/SP-4205/ch4-2.html. See also a video about the LES and this particular test at https://www.youtube.com/watch?v=AqeJzItldSQ. The Johnson Space Center and the New Mexico Museum of Space History have examples of the Little Joe II rocket.
Friday, June 26, 2015
Statistics Views Interview
The Statistics Views website published an interview-style article about the textbook. You can find it at http://www.statisticsviews.com/details/feature/8075931/Engineering-Decision-Making-and-Risk-Management-An-interview-with-author-Jeffrey.html.
Saturday, June 20, 2015
Option Awareness
MITRE organized a Decision Making in Complex Systems Technical Exchange Meeting at their McLean, Virginia, site this week, and the meeting included valuable presentations on modeling complex systems, visualizing their performance, and supporting decision making. (Full disclosure: I was one of the speakers.)
At the meeting, Jill Drury and Gary L. Klein discussed their research on option awareness (OA), which is "the perception and comprehension of the relative desirability of available options, as well as the underlying factors and trade-offs that explain that desirability."
Their experimental research has shown that OA decision support tools that present the distribution of performance for each option can help decision-makers select the most robust alternatives, understand the factors that affect their performance, and generate new options.
Their collaborators include Mark Pfaff and others at Indiana University.
For more details and examples of the visualization, see their paper in the Journal of Cognitive Engineering and Decision Making, which can be found at http://www.iupui.edu/~grappa/publications/Supporting_Complex_Decision_Making_Through_OA_Pfaff_et_al_2012_JCEDM.pdf
MITRE also recently hosted the 12th International Naturalistic Decision Making Conference.
The conference website is http://www2.mitre.org/public/ndm/index.html.
Tuesday, June 2, 2015
Improving design decision making
Here at the ISERC in Nashville this week, I picked up the January 2014 issue of IIE Transactions on Occupational Ergonomics and Human Factors (http://www.tandfonline.com/toc/uehf20/current) and found the article "Adapting Engineering Design Tools to Include Human Factors" by Judy Village et al.
The article describes how researchers at Ryerson University (in Toronto) worked with human factors specialists and engineers at a large electronics manufacturer to change how that firm designs the assembly process for its new products. The changes led to design tools that help the firm's engineers consider human factors issues during the design process to make assembly easier, safer, and faster. That is, the changes modified the objectives and constraints used to make assembly process design decisions. For example, the design must satisfy a human factors target by scoring well on 22 items related to human factors.
In addition, the process used to develop these new design tools is interesting. According to the article, "an action research approach was used, where researchers were embedded in the organization and together took action" to plan, implement, and improve the tools. This process emphasized understanding the design process and its metrics, tools, and language and then identifying opportunities to improve the design tools with feasible, desirable changes (that is, the changes had to "fit the design process" and "provide important metrics for business performance").
Although the new design tools may be specific to this firm, the process used can be applied elsewhere. The authors state that their contribution includes "the lessons learned about the process of adapting internal engineering tools"; that is, they have shown how an organization can improve design decision making.
Wednesday, May 27, 2015
Decision Making and the Panama Canal (Part II: the Americans)
(For Part I, see http://engineeringdecisionmaking.blogspot.com/2015/05/decision-making-and-panama-canal-part-i.html)
After the United States gained control of the Panama Canal effort in 1904, the type of canal was not yet specified. The two alternatives were a sea-level canal and a lock canal.
First, President Theodore Roosevelt appointed thirteen civil engineers to the International Board of Consulting Engineers and told them that the two most important attributes were the speed of construction and the likelihood of successful completion. After conducting their research, eight members voted for a sea-level canal, and five voted for a lock canal. The chairman of the Isthmian Canal Commission (ICC), which ran the Canal Zone, and the chief engineer recommended a lock canal and gave sound technical and financial reasons for their disinterested choice. Eventually, the Senate approved a lock canal. The House of Representatives then concurred, and the president confirmed the choice on June 29, 1906.
To coordinate their experienced personnel, appropriate equipment, and the ingenious system for moving material, the Americans had a centralized organization with authority over every aspect of and employee in the Canal Zone. There were no contractors working in Panama. The headquarters made a detailed plan every day for coordinating the drilling, blasting, excavating, and dumping operations to maximize productivity. Moreover, the motivated employees felt that they were part of a community, which improved morale and productivity.
The canal was officially opened on August 15, 1914. The American decision-making process was a more effective analytic-deliberative process that was focused on clear objectives (not personal ambition), and their decision-making system in Panama was more appropriate than the French scheme.
Tuesday, May 19, 2015
Decision Making and the Panama Canal (Part I: the French)
The Panama Canal, which opened in 1914, is one of the most outstanding engineering successes in history. The history of the Panama Canal has many stories of heroic explorers, brilliant engineers, and tireless laborers. It also includes critical choices by rational, intelligent decision-makers. Unfortunately, the French decision-making processes and decision-making system contributed to their failure.
An early critical choice was whether to build a sea-level canal or a lock canal. Ferdinand de Lesseps was a French diplomat who led the effort to build the Suez Canal and became part of a group interested in building a canal across Panama. In 1879 he organized a meeting in Paris to evaluate the options for a canal across Central America. De Lesseps was determined that the sea-level route across Panama should be approved, however, and he personally convinced many French delegates to support that alternative, which helped him and his business associates raise money to build the canal.
Approximately 100 small subcontractors worked to dig the canal, but, without central coordination, they impeded each other’s efforts to remove the excavated dirt and rock. Moreover, the contractors chose the simplest (cheapest) way to dump the excavated dirt and rock, and these operations were often stopped by storms, which slowed the excavation of the canal. They were not loyal to the canal company or motivated by its goals, and there was no central office to coordinate their activities.
The French effort was bankrupt within 10 years.
For more about the canal's history, check out The Canal Builders by Julie Greene and The Path Between the Seas by David McCullough.
Next time: the Americans.
Thursday, May 14, 2015
Columbia's Final Mission
In my Engineering Decision Making and Risk Management class last week we discussed the space shuttle Columbia accident using the case study Columbia's Final Mission by Michael Roberto et al. (http://www.hbs.edu/faculty/Pages/item.aspx?num=32441).
The case study highlights topics related to making decisions in the presence of ambiguous threats, including the nature of the response, organizational culture, and accountability.
It also discusses the results of the Columbia Accident Investigation Board (http://www.nasa.gov/columbia/home/CAIB_Vol1.html).
When discussing the case in class, a key activity is re-enacting a critical Mission Management Team (MMT) meeting, which gives students a chance to identify opportunities to improve decision making.
My class also discussed the design of warning systems, risk management, risk communication, different decision-making processes, and problems in decision making, all of which reinforced the material in the textbook (http://www.isr.umd.edu/~jwh2/textbook/index.html).
We concluded that the structure of the MMT made effective risk communication difficult and that key NASA engineers and managers failed to describe the risk to those in charge.
Moreover, the decision-makers used a decision-making process that prematurely accepted a claim that the foam strike would cause only a turn-around problem, not a safety-of-flight issue, and this belief created another barrier to those who were concerned about the safety of the astronauts and wanted more information.
Failures such as the Columbia accident are opportunities to learn, and case studies are a useful way to record and transmit information about failures, which is essential to learning.
We learned how ineffective risk communication and poor decision-making processes can lead to disaster.
Monday, May 4, 2015
Nepal's Earthquake Risk
The recent earthquake in Nepal, one of the poorest countries in the world, is a horrible disaster. The potential for injuries, death, and destruction was well known. Coincidentally, last week, a student in my engineering decision making and risk management course submitted an assignment summarizing a 2000 report on this topic written by experts from Nepal and California.
The report is Amod M. Dixit, Laura R. Dwelley-Samant, Mahesh Nakarmi, Shiva B. Pradhanang, and Brian E. Tucker. "The Kathmandu Valley earthquake risk management project: an evaluation." Asia Disaster Preparedness Centre, 2000.
It can be found at http://www.iitk.ac.in/nicee/wcee/article/0788.pdf.
The report stated that "a devastating earthquake is inevitable in the long term and likely in the near future." Indeed, the report cited data that earthquakes with magnitude of more than 8 on the Richter scale occur in that region approximately every 81 years. The Nepal-Bihar earthquake (magnitude 8.2) was in 1934 (81 years ago).
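As a back-of-envelope check (my calculation, assuming a Poisson arrival process rather than anything in the report), a mean recurrence of 81 years implies roughly a 12 percent chance of at least one such earthquake in any given decade:

# Back-of-envelope Poisson calculation (not from the report): probability of at
# least one magnitude-8+ earthquake within a given horizon, given 81-year recurrence.
import math

mean_recurrence_years = 81.0
for horizon in (10, 25, 50):
    p = 1.0 - math.exp(-horizon / mean_recurrence_years)
    print(f"P(at least one in {horizon} years) = {p:.2f}")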
The report described various factors that increase the earthquake risk in Nepal, including the high probability of liquefaction due to local soil conditions, poorly constructed dwellings that are more likely to fail, and "a tendency in the general population to ignore the earthquake hazard due to more immediate needs."
The project described by the report emphasized awareness-raising as part of creating institutions that would work to reduce the earthquake risk. Increasing awareness depended upon sharing information about the earthquake risk, including estimates about the potential loss of life. The authors reported that this risk communication "did not create any panic in the population. It rather made a larger part of the society wanting to improve the situation. This leads us to believe that the traditional belief of possible generation of panic should not be used as an excuse for not releasing information on risk."
Additional information about the Kathmandu Valley Earthquake Risk Management project can be found online at http://geohaz.org/projects/kathmandu.html.
Tuesday, March 31, 2015
California's offshore oil and gas platforms
An article by Max Henrion in the February 2015 issue of OR/MS Today described the decision analysis used to determine the best option for decommissioning 27 offshore oil and gas platforms off the coast of Southern California. A complete report on the analysis can be found here.
The article illustrates two of the three critical perspectives on decision making: (1) the problem-solving perspective (what to do with the platforms) and (2) the decision-making process perspective (how to make the decision).
From the problem-solving perspective, the decision is actually 27 decisions, one for each platform. The article lists multiple options in three categories: complete removal, partial removal, and leave in place for reuse. Within each category were multiple alternatives.
The attributes used to evaluate the alternatives were costs, air quality, water quality, impacts on marine mammals, impacts on birds, impacts on the benthic zone, fish production, ocean access, and compliance with lease terms.
Based on the stakeholders' preferences, the analysts created a multi-attribute model. For each platform, each decommissioning alternative was given a score (on a 0 to 100 scale) for each attribute, and the scores were combined using a weighted sum. The alternative with the best total score was identified as the best for that platform.
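For readers unfamiliar with additive multi-attribute models, here is a minimal sketch of the weighted-sum calculation for a single platform. The attribute names follow the article, but the weights and the 0-100 scores below are made up for illustration, not taken from the analysis.

```python
# Minimal weighted-sum (additive) multi-attribute evaluation for one platform.
# Attribute names follow the article; weights and scores are illustrative only.

weights = {
    "cost": 0.20, "air_quality": 0.10, "water_quality": 0.10,
    "marine_mammals": 0.10, "birds": 0.10, "benthic_zone": 0.10,
    "fish_production": 0.10, "ocean_access": 0.10, "compliance": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights should sum to one

# Scores on a 0-100 scale (higher is better) for one hypothetical platform.
alternatives = {
    "complete_removal": {"cost": 20, "air_quality": 40, "water_quality": 70,
                         "marine_mammals": 50, "birds": 60, "benthic_zone": 30,
                         "fish_production": 10, "ocean_access": 90, "compliance": 100},
    "partial_removal":  {"cost": 60, "air_quality": 70, "water_quality": 80,
                         "marine_mammals": 70, "birds": 70, "benthic_zone": 80,
                         "fish_production": 90, "ocean_access": 60, "compliance": 40},
    "leave_in_place":   {"cost": 90, "air_quality": 90, "water_quality": 60,
                         "marine_mammals": 80, "birds": 50, "benthic_zone": 70,
                         "fish_production": 80, "ocean_access": 30, "compliance": 10},
}

def total_score(scores: dict) -> float:
    """Weighted sum of the attribute scores."""
    return sum(weights[a] * s for a, s in scores.items())

for name, scores in alternatives.items():
    print(f"{name}: {total_score(scores):.1f}")
best = max(alternatives, key=lambda name: total_score(alternatives[name]))
print("Best alternative for this platform:", best)
```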
From the decision-making process perspective, the process was an analytic-deliberative one, and the analysis involved many traditional tools, including influence diagrams, decision trees, sensitivity analysis, and swing weighting.
A multidisciplinary analysis team began by identifying a wide range of options but determined that some were technically or legally infeasible. They then evaluated the remaining ones in more detail.
This included creating quantitative models to determine how decommissioning would affect fish production and ocean access. They constructed a computer program that lets a user update the scores and weights. They conducted sensitivity analyses to determine the impact of uncertainties in costs and of changes to the attribute weights. The most influential factor was the weight on compliance; a higher weight on compliance increased the desirability of complete removal.
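A companion sketch of the weight sensitivity idea (again with made-up numbers, reduced to three attributes for brevity): sweep the weight on compliance, renormalize the remaining weights, and see which alternative comes out on top at each setting.

```python
# Illustrative sensitivity sweep over the compliance weight (made-up scores).
# The remaining weight is split evenly between the other two attributes.
scores = {  # 0-100 scale, higher is better
    "complete_removal": {"cost": 20, "environment": 40, "compliance": 100},
    "partial_removal":  {"cost": 60, "environment": 85, "compliance": 40},
}

def best_alternative(w_compliance: float) -> str:
    """Return the alternative with the highest weighted-sum score for a given compliance weight."""
    w_other = (1.0 - w_compliance) / 2.0
    weights = {"cost": w_other, "environment": w_other, "compliance": w_compliance}
    totals = {name: sum(weights[a] * s for a, s in attrs.items())
              for name, attrs in scores.items()}
    return max(totals, key=totals.get)

for w in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"compliance weight = {w:.1f} -> best option: {best_alternative(w)}")
```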
After completing its analysis, the team issued its report to its client and released its model to the stakeholders. The deliberative part of the process included a series of meetings with stakeholders and the public, as well as policy discussions that led to legislation enabling partial removal, the alternative that reduces both environmental impacts and costs.
I recommend the article as a case study of the analytic-deliberative process and an illustration of how decision analysis tools can be used.
Thursday, March 5, 2015
Hardcover version available in April
A hardcover version of the textbook will be available on April 13, 2015. You can find it on Amazon and on the Wiley web site.
Saturday, February 21, 2015
Mitigating the risks from small UAVs
Earlier this week, the FAA released proposed rules for operators of small UAVs (drones). Small UAVs are those that weigh less than 55 pounds.
From a risk management perspective, the rules propose a variety of preventive actions: fly only in daylight, do not fly over people, do not fly in bad weather, fly below 500 feet altitude, fly at speeds less than 100 miles per hour, and do not fly in airport flight paths and restricted airspace areas. Most of the rules are meant to prevent accidents by reducing the likelihood of losing control, colliding with other aircraft, and crashing into third persons on the ground.
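As a toy illustration, the quantitative limits in the proposed rules can be expressed as a simple pre-flight check. The 55-pound, 500-foot, and 100-mile-per-hour figures come from the rules described above; the data structure and function are my own sketch, not anything published by the FAA.

```python
# Toy pre-flight check against the quantitative limits in the proposed small-UAV rules.
# The 55-lb, 500-ft, and 100-mph figures come from the post; everything else is illustrative.
from dataclasses import dataclass

@dataclass
class FlightPlan:
    uav_weight_lb: float
    max_altitude_ft: float
    max_speed_mph: float
    daylight_only: bool
    over_people: bool
    in_restricted_airspace: bool

def violations(plan: FlightPlan) -> list[str]:
    """Return a list of rule violations (an empty list means the plan passes these checks)."""
    problems = []
    if plan.uav_weight_lb >= 55:
        problems.append("UAV weighs 55 pounds or more (not a small UAV)")
    if plan.max_altitude_ft > 500:
        problems.append("planned altitude exceeds 500 feet")
    if plan.max_speed_mph > 100:
        problems.append("planned speed exceeds 100 miles per hour")
    if not plan.daylight_only:
        problems.append("flight not restricted to daylight")
    if plan.over_people:
        problems.append("flight path passes over people")
    if plan.in_restricted_airspace:
        problems.append("flight enters restricted airspace or airport flight paths")
    return problems

plan = FlightPlan(uav_weight_lb=10, max_altitude_ft=400, max_speed_mph=45,
                  daylight_only=True, over_people=False, in_restricted_airspace=False)
print(violations(plan) or "No violations found")
```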
The rules do not propose any contingency plans to minimize risk when the operator loses control of the UAV. Indeed, perhaps the only ones that can be considered are fanciful ones like a "disassemble" command that makes the UAV divide into smaller pieces that would cause less damage on impact or deploying airbags like those on the Mars Pathfinder. A more reasonable contingency might be a siren and flashing light that are activated to warn those nearby when the UAV begins to crash (like shouting "fore" when a golfer drives a ball towards unsuspecting bystanders).
Tuesday, February 17, 2015
Compensation and choice strategies for screening
In multicriteria decision situations, many choice strategies are used for selecting one alternative from the set of available alternatives, which is the essence of decision making. In some cases, however, the decision-maker first wants to screen the available alternatives to find a subset to consider in more detail. Choice strategies can be used for screening as well as for selection. In particular, the satisficing and disjunctive choice strategies are suited for screening. The satisficing strategy sets a threshold or cutoff for each attribute and keeps only alternatives that satisfy all of those thresholds. The disjunctive strategy also sets a threshold for each attribute, but it keeps any alternative that satisfies at least one of those thresholds.
These strategies correspond to preferences for non-compensating and compensating solutions. A decision-maker who prefers non-compensating solutions will want solutions that are good in every way, which the satisficing strategy identifies. A decision-maker who prefers compensating solutions may set higher thresholds and use a disjunctive strategy to screen the alternatives. Although no alternative can satisfy all of the thresholds, the decision-maker will be happy with those alternatives that can satisfy at least one of them because they are compensating solutions: each one performs well on at least one attribute, which compensates for poor performance on the other attributes.
For example, suppose Joe and Rose are considering which college each one should attend. Joe prefers non-compensating solutions and uses a satisficing strategy: he wants a college that has a reasonable tuition, that is at most 200 miles from home, AND has a top-40 engineering program. He will narrow his search to colleges that meet all of those criteria. Rose, however, prefers compensating solutions and uses a disjunctive strategy: she wants a college that is very inexpensive OR within 75 miles of home OR has a top-10 engineering program. Her criteria are more difficult to meet, but she will narrow her search to those colleges that can meet any one of her criteria, which will be a very different set than those that Joe considers. For instance, Joe would not consider a top-10 school that is far away and expensive, but Rose would.
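Here is a minimal sketch of the two screening strategies applied to Joe's and Rose's criteria. The college data and Joe's "reasonable tuition" threshold are made up for illustration.

```python
# Illustrative screening with satisficing (all thresholds) and disjunctive (any threshold) strategies.
# College data are made up; lower tuition and distance are better, and a lower rank number is better.
colleges = [
    {"name": "State U",      "tuition": 12000, "distance_miles": 50,  "eng_rank": 35},
    {"name": "Tech Inst",    "tuition": 45000, "distance_miles": 400, "eng_rank": 8},
    {"name": "Near College", "tuition": 30000, "distance_miles": 60,  "eng_rank": 90},
    {"name": "Far Private",  "tuition": 50000, "distance_miles": 600, "eng_rank": 55},
]

def satisficing(options, tests):
    """Keep options that pass ALL threshold tests (non-compensating screening)."""
    return [o for o in options if all(test(o) for test in tests)]

def disjunctive(options, tests):
    """Keep options that pass AT LEAST ONE threshold test (compensating screening)."""
    return [o for o in options if any(test(o) for test in tests)]

# Joe: reasonable tuition AND within 200 miles AND top-40 engineering program.
joe_tests = [lambda c: c["tuition"] <= 25000,
             lambda c: c["distance_miles"] <= 200,
             lambda c: c["eng_rank"] <= 40]

# Rose: very inexpensive OR within 75 miles OR top-10 engineering program.
rose_tests = [lambda c: c["tuition"] <= 10000,
              lambda c: c["distance_miles"] <= 75,
              lambda c: c["eng_rank"] <= 10]

print("Joe keeps: ", [c["name"] for c in satisficing(colleges, joe_tests)])
print("Rose keeps:", [c["name"] for c in disjunctive(colleges, rose_tests)])
```

In this toy data set, Rose's screen keeps the distant, expensive top-10 school that Joe's screen drops, which mirrors the contrast described above.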
Friday, January 9, 2015
The GAO reviewed NASA's decision
The GAO occasionally reviews contracting decisions made by federal agencies, and its reports provide some insight into how these agencies make their decisions (and sometimes how they make mistakes).
On January 5, 2015, the GAO released a statement about its review of NASA's decision to award Commercial Crew Transportation Capability Contracts to Boeing and SpaceX. As usual, the GAO did not rule on the merits of the firms' proposals; it limited itself to reviewing the decision process. In this case, it "reviewed the conclusions reached by NASA to determine if they were reasonable, and consistent with the evaluation approach NASA set out in its solicitation." The GAO concluded that they were. (Unfortunately, the details of the decision are not yet available.)