
Friday, October 22, 2021

The Price of Safety

 

A recent paper by Chao Chen, Genserik Reniers, Nima Khakzad, and Ming Yang discusses safety economics.  Safety economics is concerned with the costs of safety measures, and an important objective is to minimize the sum of two costs:  (1) the expected cost of the harms due to accidents in the future and (2) the current and future cost of safety measures.  Safety economics is a tool for making "decisions that are as good as possible (or 'optimal')" in order to both optimize the use of resources and maximize safety.  The paper discusses the importance of cost modeling, which includes direct costs, indirect costs, and "non-economic costs" that need to be monetized.  The value of statistical life and willingness to pay are mentioned in this context.

A common approach in safety economics is risk-based safety optimization, a type of risk management process.  It includes hazard identification, risk analysis, risk evaluation (i.e., deciding whether the risk is acceptable), and risk mitigation.  The last step is accomplished by safety cost optimization, which evaluates the costs of the different safety strategies and selects the one with the minimal cost.

The paper also discusses the minimal total safety cost approach (which considers both the safety strategy cost and the potential accident cost), cost-benefit analysis, cost-effectiveness analysis, multi-objective optimization, and game theory approaches.
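To make the minimal total safety cost approach concrete, here is a minimal sketch in Python (my own illustration with hypothetical numbers, not an example from the paper): each candidate strategy is scored by its own cost plus the expected accident cost over the planning horizon, and the strategy with the smallest sum wins.

# A minimal sketch of the minimal total safety cost approach.
# The strategies, probabilities, and costs are hypothetical
# illustrations, not data from the Chen et al. paper.

ACCIDENT_COST = 50_000_000  # monetized cost of one serious accident,
                            # including "non-economic" harms
YEARS = 20                  # planning horizon

strategies = {
    "do nothing":       {"cost": 0,         "p_accident": 0.010},
    "basic safeguards": {"cost": 2_000_000, "p_accident": 0.004},
    "full redundancy":  {"cost": 9_000_000, "p_accident": 0.001},
}

def total_cost(s):
    # Strategy cost plus expected accident cost over the horizon.
    return s["cost"] + YEARS * s["p_accident"] * ACCIDENT_COST

for name, s in strategies.items():
    print(f"{name:18s} expected total cost = ${total_cost(s):>12,.0f}")

best = min(strategies, key=lambda name: total_cost(strategies[name]))
print("minimal total cost strategy:", best)

In this toy example the winning strategy is neither the cheapest nor the safest; the total-cost criterion trades the two off (a real analysis would also discount future costs).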

To me, the variety of approaches suggests that one must first engage in metareasoning to decide which decision-making process to use.  Moreover, all of the approaches require human input: setting thresholds (for risk acceptance criteria or cost-effectiveness ratios), weighting criteria, and making tradeoffs.  In practice, as with many decision models, a "decision calculus" (Little, 1970) may emerge in which the decision-maker asks the analyst to "find the solution," and the two iterate as the decision-maker asks "what if?" in response to the results that the analyst generates.

Finally, the paper's focus on minimizing costs suggests that safety economics is based on substantive rationality, in which a decision-maker should choose the optimal alternative (Stirling, 2003).  Because bounded rationality better describes human decision-making, approaches that focus on finding satisfactory (not necessarily optimal) solutions may be more practical (Simon, 1981).

Cited sources:
Chen, Chao, Genserik Reniers, Nima Khakzad, and Ming Yang, "Operational safety economics: Foundations, current approaches and paths for future research," Safety Science, Volume 141, 2021.
Little, John D.C., "Models and managers: the concept of a decision calculus," Management Science, Volume 16, Number 8, pages B-466-485, 1970.
Simon, Herbert A., The Sciences of the Artificial, second edition, The MIT Press, Cambridge, Massachusetts, 1981.
Stirling, Wynn C., Satisficing Games and Decision Making, Cambridge University Press, Cambridge, 2003.
 

Image source: https://www.gov.uk/government/news/venues-required-by-law-to-record-contact-details

Monday, November 23, 2020

Tuning coronavirus exposure warnings

 In The Washington Post, Geoffrey A. Fowler described a smartphone app that alerts a user when the user has been in close contact with someone who reports a positive COVID-19 test result.  The app uses Bluetooth technology to track which other smartphones one has been near (but not the physical locations of the contacts).

 Fowler's article also mentioned the problem of false positives, a typical problem in the design of any warning system.  The system is meant to alert a user if they have had a close contact with someone who tests positive.  To do that, the app must define a "close contact" based on the signal strength (stronger = closer) and the duration of the contact.  If these parameters are set too "low" so that weak signals and short encounters are considered "close contacts," then a user may get too many alerts (false positives); on the other hand, if the parameters are set too "high," then a user may not receive an alert about a risky encounter (a false negative).  
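As a rough illustration of how these parameters interact (the variable names and threshold values below are my own assumptions, not the settings of any actual exposure-notification app), a contact might be flagged like this:

# A toy "close contact" rule based on signal strength and duration.
# The thresholds are hypothetical, not those of any actual app; real
# apps use calibrated Bluetooth attenuation buckets and weighting.

RSSI_THRESHOLD_DBM = -65  # stronger (less negative) signal = closer
MIN_DURATION_MIN = 15     # minimum cumulative time near the other phone

def is_close_contact(rssi_dbm, duration_min):
    return rssi_dbm >= RSSI_THRESHOLD_DBM and duration_min >= MIN_DURATION_MIN

print(is_close_contact(rssi_dbm=-60, duration_min=20))  # True: strong and long
print(is_close_contact(rssi_dbm=-80, duration_min=20))  # False: signal too weak

Loosening the thresholds (say, -75 dBm and 5 minutes) flags more encounters, trading false negatives for false positives; tightening them does the reverse.  Making these two constants user-adjustable would implement the risk-preference idea discussed below.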

The article quoted Jenny Wanger of the Linux Foundation Public Health, who said, "We are working as a community to optimize it and to figure out how to get those settings to be in the right place so that we do balance the risk of false positives with the getting notifications out to people who are at risk."

Note that an alert here is not a positive test result for the user, only a warning that one was near someone with a positive test result and thus may have been exposed.  The costs of false positives and false negatives are subjective, of course.  At this point in the pandemic, a false positive, which may cause users to quarantine or limit their activities unnecessarily, may be more costly than a false negative for someone who is taking precautions while doing typical activities and likely having many brief, low-risk encounters.  This type of user may prefer to know only about very close contacts that carry a higher risk of transmission.

The opposite may be true for someone who is significantly at risk for becoming seriously ill and has very few contacts in a typical day.  Then, a more sensitive (but less specific) system may be more appropriate.

Thus, it would be useful for users to have the ability to set the warning threshold based on their risk preferences, similar to the way financial advisors ask investors about their risk tolerance.



Wednesday, November 27, 2019

The Risk of Space Junk

Image credit: NASA


The November 2019 issue of Prism (published by ASEE) includes an article by Thomas Grose about the risks associated with space junk in low earth orbit (LEO).  The likelihood of a collision continues to increase: 4,000 satellites and the International Space Station (ISS) travel in LEO, which contains an estimated 128 million pieces of debris; about 20,000 of those pieces have a diameter greater than 10 cm.  The consequences of a collision could be more debris, a damaged satellite (which could interrupt communication or other services), or a casualty on the ISS.  A possible worst case is the Kessler syndrome (a chain reaction of debris-generating collisions).

The current mitigation efforts include rules to reduce the growth of space junk and a system for detecting possible collisions (so that spacecraft can be moved out of the way).  NASA has a technical standard, "Process for Limiting Orbital Debris," that requires space programs to assess and limit the likelihood of generating debris during normal operations, explosions, intentional breakups, and collisions.  "Orbital debris" is defined as follows:
Artificial objects, including derelict spacecraft and spent launch vehicle orbital stages, left in orbit which no longer serve a useful purpose. In this document, only debris of diameter 1 mm and larger is considered.
The NASA standard also discusses reducing the likelihood of collision by reducing a spacecraft's cross-sectional area.

New systems for tracking more space junk more precisely (e.g., the Space Fence) could lead to an automated "traffic control" system that warns a spacecraft operator when a collision is imminent while reducing the likelihood of false alarms.  An alarm is costly because it disrupts normal operations, and the spacecraft must burn fuel to move away from the space junk and then return to its normal position.

Researchers are also developing spacecraft that can capture space junk, which could reduce the likelihood of a collision.

The article mentions few efforts to reduce the consequences of a collision.  Astronauts aboard the ISS can head to a shelter if a close call is imminent.  But hardening a satellite would require more mass, which makes it more expensive.  Perhaps we need "shatterproof" materials or designs for spacecraft.

Tuesday, October 15, 2019

Robust Multiple-Vehicle Route Planning

Planning problems with multiple vehicles occur in many settings, including the well-known vehicle routing problem (VRP) and drone delivery operations such as the flying sidekick problem.  Most approaches to these problems assume that the vehicles are reliable and will not fail.

In the real world, however, a vehicle could fail, in which case the other vehicles would have to change their routes to cover the locations that the failed vehicle did not visit.  In recent research done at the University of Maryland, Ruchir Patel studied this challenge and developed approaches for generating robust routes for multiple-vehicle operations.  His thesis, Multi-Vehicle Route Planning for Centralized and Decentralized Systems, describes the results of his research.  The key idea is to consider the worst-case performance of a solution instead of the best-case performance.  A solution with better worst-case performance is more robust and will perform well even if a vehicle fails.

He found that a genetic algorithm (GA) could find high-quality solutions, but the computational effort was substantial because evaluating the robustness of a solution required evaluating all possible failure scenarios and generating a recovery plan for each one.  His approach used Prim's algorithm to generate a minimum spanning tree and construct a recovery plan quickly.  Although the computational effort may be acceptable for pre-mission planning, faster approaches could be useful for real-time, online planning.
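As a rough sketch of the spanning-tree idea (my illustration with made-up coordinates, not Patel's actual code): Prim's algorithm builds a minimum spanning tree over the stops a failed vehicle leaves behind, and the tree's total weight serves as a fast estimate of the extra travel a recovery plan needs, so every failure scenario can be scored without solving a full routing problem.

# Prim's algorithm over a failed vehicle's unvisited stops; the MST
# weight bounds the extra travel needed to cover them. Hypothetical
# coordinates, for illustration only.
import heapq
from math import dist

def mst_weight(points):
    """Total edge weight of a minimum spanning tree over 2-D points."""
    if not points:
        return 0.0
    visited = {0}
    total = 0.0
    # heap of (distance, point index) candidate edges leaving the tree
    heap = [(dist(points[0], points[j]), j) for j in range(1, len(points))]
    heapq.heapify(heap)
    while len(visited) < len(points):
        d, j = heapq.heappop(heap)
        if j in visited:
            continue  # stale entry; j was added via a shorter edge
        visited.add(j)
        total += d
        for k in range(len(points)):
            if k not in visited:
                heapq.heappush(heap, (dist(points[j], points[k]), k))
    return total

# Stops left behind by a failed vehicle (hypothetical):
remaining = [(0, 0), (3, 4), (6, 0), (3, 1)]
print(f"MST estimate of extra recovery travel: {mst_weight(remaining):.2f}")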

Monday, August 26, 2019

Managing the risk of wildfires

In the August 25, 2019, issue of The Washington Post, Steven Pearlstein wrote a column about how San Diego Gas & Electric (SDG&E) is proactively managing the risk that its power grid will cause a wildfire.  Some wildfires start when trees or branches fall and hit a transmission line, and the energized power line then sparks a fire on the ground.  The power company's approach addresses two potential problems: damage to the transmission line, and the transmission line sparking a wildfire.

For the first risk, SDG&E is burying transmission lines in high-risk areas in the mountains and backcountry.  This is a preventive action because buried transmission lines cannot be damaged by falling trees and branches.

For the second risk, SDG&E has installed a system that monitors transmission lines in remote areas, detects when a transmission line is damaged, and turns off the power to that transmission line instantly.  This contingency plan (turn off the power if the line is damaged) prevents the lines from sparking a wildfire.

Moreover, SDG&E is monitoring the conditions in the high-risk area and calculating the conditional probability that a spark would lead to a large wildfire in those locations.  If that conditional probability becomes too high, then the risk has increased to an unacceptable level, and the power company cuts the power to that transmission line (even though it is not damaged).  From Pearlstein's column:
“We are doing with wildfires what the National Weather Service has done with hurricanes,” said Scott Drury, the utility’s president. ... The wildfire model gives SDG&E the confidence to shut off power because it can pinpoint precisely where and when the risks are at levels that have resulted in disastrous fires in the past.
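In effect, the shutoff rule is a threshold test on a conditional probability.  A minimal sketch, with an entirely hypothetical risk model and threshold (not SDG&E's actual wildfire model):

# Estimate P(large fire | spark) from current conditions and
# de-energize the line if it is too high. Model, inputs, and
# threshold are hypothetical illustrations.

def p_large_fire_given_spark(wind_mph, humidity_pct, fuel_dryness):
    # Crude illustrative score clamped to [0, 1]; a real model would
    # be fit to historical fire-weather data for each location.
    score = 0.02 * wind_mph + 0.5 * fuel_dryness - 0.005 * humidity_pct
    return min(max(score, 0.0), 1.0)

SHUTOFF_THRESHOLD = 0.6  # risk level deemed unacceptable (hypothetical)

p = p_large_fire_given_spark(wind_mph=35, humidity_pct=10, fuel_dryness=0.8)
action = "cut power to the line" if p > SHUTOFF_THRESHOLD else "keep the line energized"
print(f"P(large fire | spark) = {p:.2f}: {action}")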
Pearlstein's article also discusses how people in the area are managing the risks associated with a power outage: parking outside their garages, storing water, and buying generators.

Saturday, August 18, 2018

Improving Decision-Making Processes

The WomenCorporateDirectors Foundation’s Thought Leadership Commission and the KPMG Board Leadership Center issued a report that describes some of the problems that can cause poor decision making and recommends improving decision-making processes.  The report discusses incomplete information, groupthink, overconfidence, and other poor practices as causes of decision-making failures. 

The report presents five decision-making styles (first presented in an article by Dan Lovallo and Olivier Sibony):
  • Visionary,
  • Guardian,
  • Motivator,
  • Flexible, and
  • Catalyst.

Each style has its strengths and weaknesses.  I would add that a decision-maker needs to use the right decision-making style for the decision context at hand (see more about this at this blog post). 

The report also discusses ways to create a more inquisitive, risk-based decision-making process by considering multiple viewpoints, identifying the pros and cons of every alternative, and discussing the associated risks (what could go wrong).  The resulting process will resemble the "discovery decision-making process" described by Paul Nutt.

Finally, the report recommends evaluating the decision-making process, not its outcomes, as a way to identify opportunities for improvement.

HT: ISE Magazine

Wednesday, July 12, 2017

Japanese food that minimizes the risk of choking

NPR posted a piece about food that minimizes the risk of choking.  In Japan, more people die from choking than from traffic accidents, and the difficulty that elderly people have swallowing is a leading cause of choking.  The cooked food is pureed and then re-formed (with a thickener) into a dish that looks like regular food but is easier to swallow (no chewing required).

Meanwhile, The Washington Post had an article about the importance of knowing the Heimlich maneuver and CPR, both of which can help someone who is choking. 

These highlight both sides of managing risk: (1) preventing a potential problem (choking) by eating foods that are less likely to cause choking, and (2) following a contingency plan (the Heimlich maneuver) if someone does start choking.

Bill Murray, who played a weatherman who saves someone from choking in the 1993 movie Groundhog Day, saved a man from choking in a Phoenix restaurant in 2016 by using the Heimlich maneuver, which he learned while making the movie.

Saturday, September 17, 2016

How to prevent smartphone battery fires



The battery overheating problems of the Samsung Galaxy Note 7 have led to various efforts to mitigate the risk:
  • The company recalled the smartphone, so consumers should return or exchange it.
  • Airlines have banned using the smartphone during flights and placing it in checked luggage.
  • A software update from Samsung will limit battery charging to 60% of capacity.
  • Some tech advice websites suggest disabling applications and background processes, limiting the use of games and web browsing, not using the device while recharging the battery, and avoiding third-party replacement batteries.

These suggestions range from avoiding the risk to reducing the likelihood of overheating to reducing its consequences.  Many users may simply accept the risk.

Links:
CPSC recall announcement: http://www.cpsc.gov/en/Recalls/2016/Samsung-Recalls-Galaxy-Note7-Smartphones/
Bloomberg article about software update: http://www.bloomberg.com/news/articles/2016-09-14/samsung-limits-note-7-battery-charging-to-prevent-overheating
Tips: http://gadgetstouse.com/gadget-tech/avoid-overheating-problems-smartphone/18946
Photo: A Samsung Galaxy S3 that exploded in 2013, from Le Matin.

Wednesday, May 4, 2016

The Wright Brothers

I just finished reading The Wright Brothers, by David McCullough.  It is a great story about two determined Americans who, like many engineers before and after, did their research and then designed, built, tested, improved, demonstrated, and marketed a remarkable invention.
The book describes not only their development process but also their personalities and family and friends, and I highly recommend it, but this is not a book review.

Flying is a risky endeavor, and Orville and Wilbur Wright made some key decisions to reduce personal and financial risk.  They decided to conduct their early glider and airplane tests at Kitty Hawk, North Carolina, which had not only good wind conditions for flying but also soft sand for landing.  They also decided to never fly together to avoid the risk that a crash would kill them both (only once, after many successful flights, did they fly together near their home in Dayton, Ohio).  They decided to keep their bicycle shop open to generate the profits needed to pay for the cost of their machines and the travel to North Carolina (they did not gamble their life savings on a speculative venture). 

On the other hand, they did accept some risk: they were not sure that their experiments would succeed, and every flight brought the potential for a crash.  Still, they learned from their failures, and managed the associated risks successfully.  Their story has important lessons about effectively managing risks.




Tuesday, April 19, 2016

UAV Operations and Failures

Earlier this month, the FAA's Micro Unmanned Aircraft Systems Aviation Rulemaking Committee (ARC) issued its recommendations in a final report: http://www.faa.gov/uas/publications/media/Micro-UAS-ARC-FINAL-Report.pdf

The recommendations include classifying UAVs (drones) into four categories, based on the risk that they pose to people underneath them.  If the UAV fails, it will crash and could cause a serious injury. 

If the mass of the UAV is less than or equal to 250 grams, then it would be in Category 1, which would have no additional restrictions (beyond those already in place).

UAVs more likely to cause a serious injury would face more restrictions.  For instance, Category 2 UAVs "must maintain minimum set-off distances of 20 feet above people’s heads, or 10 feet laterally away from people, and may not operate so close to people as to create an undue hazard to those people."  Category 3 UAVs would not be allowed to fly over crowds or dense concentrations of people.  A Category 4 UAV, on the other hand, could do that if it complied with a documented, risk mitigation plan.

There are many interesting details about the ARC, its risk attitude, how the ARC developed its recommendations, and other factors.  In particular, the ARC did not consider the likelihood of a UAV failure or the likelihood that it would hit someone if it failed; it considered only the distribution of the consequence (the chance of a serious injury) if it hit someone:  "Specifically, the ARC recommends that a small UAS be permitted to conduct limited operations over people ... if that UAS presents a 30% or lower chance of causing [a serious] injury upon impact with a person."
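A toy sketch of that consequence-only classification: only the 250-gram Category 1 cutoff and the 30% serious-injury figure come from the report as quoted above; the rest of the logic is my simplified, hypothetical illustration.

# Categorize a UAV by the consequence of hitting a person.
# Only the 250 g and 30% figures are from the ARC report;
# the remaining boundaries are hypothetical simplifications.

def uav_category(mass_g, p_serious_injury_on_impact):
    if mass_g <= 250:
        return 1  # Category 1: no additional restrictions
    if p_serious_injury_on_impact <= 0.30:
        return 2  # may operate near people, subject to set-off distances
    return 3      # no flight over crowds (Category 4 would need a
                  # documented risk mitigation plan to do so)

print(uav_category(mass_g=200, p_serious_injury_on_impact=0.50))   # 1
print(uav_category(mass_g=900, p_serious_injury_on_impact=0.20))   # 2
print(uav_category(mass_g=2000, p_serious_injury_on_impact=0.60))  # 3

Note that the function takes no failure probability as input at all, which mirrors the ARC's choice to consider only the consequence of an impact.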

Tuesday, January 26, 2016

Preparing for Disasters

The blizzard last weekend provided plenty of examples of risk management: people in the path of the storm bought food and batteries, refueled their cars and trucks, and got ready to spend some time at home.

One cannot prevent a natural disaster, but one can try to prevent some of the potential problems that it could cause.  In New York City, the preparations included a sufficiently large force to clear streets (4,600 workers and more than 2,000 pieces of equipment, according to The Washington Post) and specific actions just before the storm (closing the transit system, which prevented buses from getting stuck in the snow and blocking snowplows).

On the other side of the country, officials are making contingency plans for a much worse disaster: an earthquake and tsunami in the Pacific Northwest that could kill thousands, leave many homeless, and disrupt the economy.  Oregon’s response plan, called the Cascadia Playbook, describes the system for moving personnel, equipment, and supplies into the area after the disaster and setting up medical facilities and shelters for the homeless.

In both cases, officials have used previous failures to make better plans.  In the 2010 Snowmageddon storms, the New York transit system stayed open, which led to stranded buses; this time officials closed it.  The 2011 Japanese tsunami gave planners in the Pacific Northwest a more accurate picture of what would be needed.

Links:
Story about Washington, D.C., and New York City:
https://www.washingtonpost.com/local/after-a-wild-winter-weekend-a-difficult-commute-awaits/2016/01/24/3c987ebe-c302-11e5-b933-31c93021392a_story.html

Story about planning in the Pacific Northwest:
https://www.washingtonpost.com/national/health-science/is-a-massive-earthquaketsunami-overdo-along-the-northern-west-coast/2016/01/25/b423740c-bfce-11e5-bcda-62a36b394160_story.html

Photos from Washington: https://www.washingtonpost.com/local/time-to-pick-up-the-pieces-after-major-dc-area-snowstorm/2016/01/24/889fa7b4-c2c2-11e5-8965-0607e0e265ce_gallery.html?hpid=hp_no-name_photo-story-b%3Ahomepage%2Fstory

Monday, January 4, 2016

Keeping a Pipeline Safe

The risk associated with the 62-year-old pipelines under the Straits of Mackinac in northern Michigan was the subject of an article by Steve Friess in The Washington Post on Sunday.

After a different Enbridge pipeline in Michigan failed in 2010 and released about 20,000 barrels of oil, the state appointed a task force to study the oil pipelines throughout Michigan, including those under the Straits of Mackinac.  The task force report made four recommendations about the Straits pipelines and nine others for the whole state.  The task force recommended that the Straits pipelines should not transport heavy crude oil.  Enbridge has stated that the pipelines carry only light crude oil and light synthetic crude and natural gas liquids, including propane.  See, for instance, its Operational Reliability Plan.  The Enbridge website has more information about the pipelines and the company's plans to keep them safe; see http://www.enbridgeus.com/Line-5.aspx

Everyone agrees that a failure of the Straits pipelines could cause severe environmental damage. 
Enbridge, of course, also has a financial risk; it would lose revenue if the pipeline fails and has to be shut down.  An Enbridge spokesperson stated, “Every day we’re out repairing pipelines and shutting down due to release, we’re not moving product. It’s in our interest as a pipeline company to keep it in the pipe.”

The Michigan Petroleum Pipeline Task Force website also has some interesting documents about the pipeline construction, including the 1953 engineering analysis (http://michigan.gov/documents/deq/Appendix_A.2_493980_7.pdf), which describes the selection of the location, the construction of the pipeline, and the analysis of the stresses involved.  In general, it is a good example of risk assessment and mitigation.  It acknowledges both the environmental and financial risks.  The pipeline elsewhere in Michigan has only one pipe, but two pipelines were used at the Straits, "for purposes of extra flexibility, extra strength, and a greater factor of safety against possible damage," according to this report.  If there are two pipes, then a leak in one pipe should release less oil, and the other pipe can continue to operate, which minimizes the financial and operational disruptions.  The report mentions the hazard from a ship's anchor and describes why this is unlikely in general and how the pipeline design will reduce this risk.  It also mentions that "any possible contamination of the waters caused by oil spillage from the pipeline crossing is considered remote in comparison to the amount and possibility of spillage from oil tankers."

This last point remains extremely relevant: given that people in Michigan use oil from Canada, all of the transportation options have risks, which the task force report acknowledged. 
For example, trains transporting oil had accidents in Quebec and Virginia.




Monday, August 24, 2015

Deciding to Save New Orleans

Ten years after Hurricane Katrina, the city of New Orleans has done much to mitigate the risk of a hurricane.  The August 22 issue of The Washington Post included an article by Chris Mooney (http://www.washingtonpost.com/sf/national/2015/08/21/the-next-big-one/) about an important decision that remains to be made: whether to use sediment diversion to protect the wetlands that protect New Orleans.  In addition to slowing the loss of wetlands, the advantages include a relatively low one-time cost and a potential economic value from sportsmen and tourists who would enjoy the wetlands.  The key disadvantage is the disruption to the local fishing industry.  The key uncertainties are whether the diversions will actually work, because many factors influence wetland restoration, and whether the wetlands will meaningfully reduce the storm surge.

The decision-making process appears to be an analytic-deliberative one: a state advisory board has scientific experts, while fishermen have organized a group to oppose the diversions and (if necessary) block construction, and a state agency needs to make a decision before the end of the year.


Saturday, August 22, 2015

Mitigating the Risk of Equipment Maintenance

Earlier this month I led a course on engineering risk management for a group of engineers and managers at a manufacturing firm that does sheet metal work and makes a variety of air distribution systems and components.  They have numerous machines that use multiple sources of power, which makes equipment maintenance more challenging.  They use lockout and tagout (LOTO) procedures (https://www.osha.gov/SLTC/controlhazardousenergy/index.html) but were interested in a systematic procedure for managing the risk associated with equipment maintenance.  While covering the process of risk management, the associated activities, and the fundamentals of decision making, we discussed how they could apply these steps to make their equipment maintenance operations safer.  The discussion included the potential problems of their current lockout procedures.

The bottom line: establishing and documenting LOTO procedures is a useful step, but it doesn't replace a systematic risk management process that assesses, analyzes, evaluates, mitigates, and monitors the risks of equipment maintenance.  Look for the potential problems, identify the root causes, put in place safeguards that prevent them, and have contingency plans that can react promptly to keep a problem from getting worse.

P.S. I would like to thank the IIE Training Center (http://www.iienet2.org/IIETrainingCenter/Default.aspx) for the opportunity to lead this course.  Please contact them if you're interested in a short course on engineering decision making and risk management.


Tuesday, July 14, 2015

When New Horizons Halted

As the New Horizons spacecraft flies by Pluto today, it is collecting and sending back to Earth data from its many sensors.  But this success almost didn't happen.

On Saturday, July 11, The Washington Post had an article about the crisis that occurred just a week earlier (July 4).

The story illustrates a couple of key ideas in decision making and risk management.

First, the loss of contact occurred because the spacecraft was programmed with a contingency plan: if something goes wrong, then go to safe mode: switch to the backup computer, turn off the main computer and other instruments, start a controlled spin to make navigation easier, and start transmitting on another frequency.  A contingency plan is a great way to manage risk.
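A contingency plan like this is essentially pre-programmed logic.  A toy sketch (every name here is a hypothetical illustration; the actual New Horizons fault-protection software is far more complex):

# A toy sketch of a pre-programmed safe-mode contingency.
# All names are hypothetical illustrations.

class Spacecraft:
    def command(self, action):
        print("executing:", action)

def enter_safe_mode(sc):
    # Runs autonomously when a fault is detected: at Pluto's distance,
    # a round-trip radio signal takes about nine hours, so the
    # spacecraft cannot wait for ground controllers.
    sc.command("switch to backup computer")
    sc.command("power off main computer and science instruments")
    sc.command("begin controlled spin to simplify navigation")
    sc.command("transmit on the emergency frequency")

enter_safe_mode(Spacecraft())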

Second, fixing this situation required the New Horizons operations team to manage an "issue," not a "risk," because the problem had already occurred (it was not a potential problem).

Finally, after diagnosing the problem and re-establishing contact with the spacecraft, the team had to make a key decision: whether to stick with the backup computer or switch back to the main computer (which had become overloaded, causing the crisis).  Here, they displayed some risk aversion (not surprising considering the one-shot chance to observe Pluto): they went back to the main computer because they "trusted [it], knew its quirks, had tested it repeatedly."

Congratulations to all of the engineers, scientists, and technicians who designed, built, and operate the New Horizons spacecraft!


Saturday, July 4, 2015

Managing the Risk of Fireworks


The Fourth of July is a great opportunity to talk about risk management.  Setting off fireworks at home is a popular entertainment, but it is dangerous, as the press reminds us every year: http://www.washingtonpost.com/blogs/govbeat/wp/2015/07/03/here-are-photos-of-all-the-horrific-ways-fireworks-can-maim-or-kill-you/

After assessing the risk, how can one mitigate it?  Here are the basic approaches:  (1) avoiding the risk by abandoning the planned action or eliminating the root cause or the consequences, (2) reducing the likelihood of the root cause or decreasing its consequences by modifying the planned action or performing preventive measures, (3) transferring the risk to another organization, or (4) assuming (accepting) the risk without mitigating it.

How would these apply to fireworks at home?
1. Avoid the risk: don't do it.  Go to a fireworks show or watch one on TV or find something else to do.
2. Reduce the risk: stick to sparklers and party poppers and follow safety guidelines (like these from http://www.cpsc.gov/safety-education/safety-education-centers/fireworks/): keep fireworks away from brush and other substances that can burn, don't let children play with fireworks, keep a bucket of water handy to douse the fireworks or anything that catches fire.
3. Transfer the risk: hire a professional (or other trained expert) to do a fireworks show at your place, or let a neighbor run the show while you and your family watch from a safe distance.
4. Accept the risk: indulge in the tradition!

The relative desirability of these options depends upon how much you like fireworks and how much risk you're willing to accept.

Have a Happy Fourth of July!

Thursday, May 14, 2015

Columbia's Final Mission

In my Engineering Decision Making and Risk Management class last week we discussed the space shuttle Columbia accident using the case study Columbia's Final Mission by Michael Roberto et al.  (http://www.hbs.edu/faculty/Pages/item.aspx?num=32441).
The case study highlights topics related to making decisions in the presence of ambiguous threats, including the nature of the response, organizational culture, and accountability.
It also discusses the results of the Columbia Accident Investigation Board (http://www.nasa.gov/columbia/home/CAIB_Vol1.html).
When discussing the case in class, a key activity is re-enacting a critical Mission Management Team (MMT) meeting, which gives students a chance to identify opportunities to improve decision making. 

My class also discussed the design of warning systems, risk management, risk communication, different decision-making processes, and problems in decision making, all of which reinforced the material in the textbook (http://www.isr.umd.edu/~jwh2/textbook/index.html).

We concluded that the structure of the MMT made effective risk communication difficult and that key NASA engineers and managers failed to describe the risk to those in charge.
Moreover, the decision-makers used a decision-making process that prematurely accepted a claim that the foam strike would cause only a turn-around problem, not a safety-of-flight issue, and this belief created another barrier to those who were concerned about the safety of the astronauts and wanted more information.

Failures such as the Columbia accident are opportunities to learn, and case studies are a useful way to record and transmit information about failures, which is essential to learning.
We learned how ineffective risk communication and poor decision-making processes can lead to disaster.


Monday, May 4, 2015

Nepal's Earthquake Risk

The recent earthquake in Nepal, one of the poorest countries in the world, is a horrible disaster.  The potential for injuries, death, and destruction was well known.  Coincidentally, last week, a student in my engineering decision making and risk management course submitted an assignment summarizing a 2000 report on this topic written by experts from Nepal and California.

The report is Amod M. Dixit, Laura R. Dwelley-Samant, Mahesh Nakarmi, Shiva B. Pradhanang, and Brian E. Tucker. "The Kathmandu Valley earthquake risk management project: an evaluation." Asia Disaster Preparedness Centre, 2000.
It can be found at http://www.iitk.ac.in/nicee/wcee/article/0788.pdf.
The report stated that "a devastating earthquake is inevitable in the long term and likely in the near future."  Indeed, the report cited data showing that earthquakes with magnitudes greater than 8 on the Richter scale occur in that region approximately every 81 years.  The Nepal-Bihar earthquake (magnitude 8.2) struck in 1934 (81 years ago).
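Under a simple Poisson (memoryless) model, which is my own back-of-the-envelope assumption rather than the report's method, an 81-year mean recurrence implies a substantial chance of a great earthquake in any given planning window:

# Back-of-the-envelope Poisson calculation (my assumption, not the
# report's model): if magnitude-8+ earthquakes recur on average every
# 81 years, what is the chance of at least one in a given window?
from math import exp

MEAN_RECURRENCE_YEARS = 81  # from the data cited above

def p_at_least_one(window_years):
    rate = 1.0 / MEAN_RECURRENCE_YEARS  # average events per year
    return 1.0 - exp(-rate * window_years)

for window in (10, 25, 50):
    print(f"P(M8+ earthquake within {window} years) = {p_at_least_one(window):.0%}")

This prints roughly 12%, 27%, and 46% for 10-, 25-, and 50-year windows, which is consistent with the report's warning that a devastating earthquake was likely in the near future.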

The report described various factors that increase the earthquake risk in Nepal, including the high probability of liquefaction due to local soil conditions, poorly constructed dwellings that are more likely to fail, and "a tendency in the general population to ignore the earthquake hazard due to more immediate needs."

The project described by the report emphasized awareness-raising as part of creating institutions that would work to reduce the earthquake risk.  Increasing awareness depended upon sharing information about the earthquake risk, including estimates about the potential loss of life.  The authors reported that this risk communication "did not create any panic in the population. It rather made a larger part of the society wanting to improve the situation. This leads us to believe that the traditional belief of possible generation of panic should not be used as an excuse for not releasing information on risk."

In addition to the report, additional information about the Kathmandu Valley Earthquake Risk Management project can be found online at
http://geohaz.org/projects/kathmandu.html.

Saturday, February 21, 2015

Mitigating the risks from small UAVs

Earlier this week, the FAA released proposed rules for operators of small UAVs (drones).  Small UAVs are those that weigh less than 55 pounds. 

From a risk management perspective, the rules propose a variety of preventive actions: fly only in daylight, do not fly over people, do not fly in bad weather, fly below 500 feet altitude, fly at speeds less than 100 miles per hour, and do not fly in airport flight paths and restricted airspace areas.  Most of the rules are meant to prevent accidents by reducing the likelihood of losing control, colliding with other aircraft, and crashing into third persons on the ground. 

The rules do not propose any contingency plans to minimize risk when the operator loses control of the UAV.  Indeed, perhaps the only ones that can be considered are fanciful ones like a "disassemble" command that makes the UAV divide into smaller pieces that would cause less damage on impact or deploying airbags like those on the Mars Pathfinder.  A more reasonable contingency might be a siren and flashing light that are activated to warn those nearby when the UAV begins to crash (like shouting "fore" when a golfer drives a ball towards unsuspecting bystanders).