Monday, November 23, 2020

Tuning coronavirus exposure warnings

In The Washington Post, Geoffrey A. Fowler described a smartphone app that alerts users when they have been in close contact with someone who reports a positive COVID-19 test result.  The app uses Bluetooth technology to track which other smartphones one has been near (but not the physical locations of those contacts).

Fowler's article also mentioned the problem of false positives, which arises in the design of any warning system.  The system is meant to alert users if they have had a close contact with someone who tests positive.  To do that, the app must define a "close contact" based on the signal strength (stronger = closer) and the duration of the contact.  If these parameters are set too "low," so that weak signals and short encounters count as "close contacts," then a user may get too many alerts (false positives); on the other hand, if the parameters are set too "high," then a user may not receive an alert about a risky encounter (a false negative).
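To make the trade-off concrete, here is a minimal sketch of what such a classification rule might look like.  This is not the actual app logic; the threshold values and names are purely illustrative assumptions.

```python
# A hypothetical "close contact" rule.  The RSSI and duration thresholds
# below are illustrative, not the values any real exposure app uses.

RSSI_THRESHOLD_DBM = -65      # stronger (less negative) signal = closer device
DURATION_THRESHOLD_MIN = 15   # minimum cumulative exposure time, in minutes

def is_close_contact(rssi_dbm: float, duration_min: float) -> bool:
    """Flag an encounter as a 'close contact' worth alerting about."""
    return rssi_dbm >= RSSI_THRESHOLD_DBM and duration_min >= DURATION_THRESHOLD_MIN
```

Lowering either threshold flags more encounters (catching more risky contacts but generating more false positives); raising either one does the reverse.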

The article quoted Jenny Wanger of Linux Foundation Public Health, who said, "We are working as a community to optimize it and to figure out how to get those settings to be in the right place so that we do balance the risk of false positives with the getting notifications out to people who are at risk."

Note that an alert here is not a positive test result for the user, only a warning that one was near someone with a positive test result and thus may have been exposed.  The costs of false positives and false negatives are subjective, of course.  At this point in the pandemic, a false positive, which may cause a user to quarantine or limit their activities unnecessarily, may be more costly than a false negative for someone who is taking precautions while doing typical activities and is likely having many brief, low-risk encounters.  This type of user may prefer to know only about really close contacts that have a higher risk of transmission.

The opposite may be true for someone who is significantly at risk for becoming seriously ill and has very few contacts in a typical day.  Then, a more sensitive (but less specific) system may be more appropriate.

Thus, it would be useful for users to have the ability to set the warning threshold based on their risk preferences, similar to the way financial advisors ask investors about their risk tolerance.
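Continuing the hypothetical sketch above, a risk-preference setting could simply select among preset parameter pairs; again, the preset names and values here are invented for illustration.

```python
# Hypothetical presets mapping a user's risk tolerance to alert thresholds.
# A more sensitive preset flags weaker signals and shorter encounters.

PRESETS = {
    "more sensitive": {"rssi_dbm": -75, "duration_min": 5},   # high-risk user, few contacts
    "balanced":       {"rssi_dbm": -65, "duration_min": 15},
    "more specific":  {"rssi_dbm": -55, "duration_min": 30},  # only the closest contacts
}

def is_close_contact(rssi_dbm, duration_min, preference="balanced"):
    p = PRESETS[preference]
    return rssi_dbm >= p["rssi_dbm"] and duration_min >= p["duration_min"]
```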



Tuesday, May 26, 2020

Decision making in the Ideation Toolkit

The Ideation Toolkit (from Keen Engineering Unleashed) is a collection of information that engineering students can use as they develop an idea for a product.

Although this toolkit includes the Analytic Hierarchy Process (AHP), I suggest that educators use multi-attribute utility theory (MAUT) instead.  I have been teaching engineering decision making for many years, and, although my textbook includes both AHP and MAUT, I have found that engineering students find MAUT easier to adopt and use correctly.  Using AHP in a rational way is more difficult than it looks, whereas MAUT is more straightforward for making decisions when the alternatives must be evaluated on multiple criteria (e.g., cost, strength, and durability).
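To illustrate why MAUT is straightforward in practice, here is a minimal sketch of an additive multi-attribute evaluation.  The alternatives, weights, and single-attribute scores are invented for the example; a real application would elicit them from the decision maker.

```python
# Additive MAUT: overall value = sum over criteria of weight * utility.
# Weights sum to 1; utilities are scaled from 0 (worst) to 1 (best).

weights = {"cost": 0.5, "strength": 0.3, "durability": 0.2}

alternatives = {
    "Design A": {"cost": 0.9, "strength": 0.4, "durability": 0.6},
    "Design B": {"cost": 0.5, "strength": 0.8, "durability": 0.7},
}

def overall_utility(scores):
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in alternatives.items():
    print(f"{name}: {overall_utility(scores):.2f}")
# Design A: 0.69, Design B: 0.63 -> Design A is preferred
```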

Wednesday, May 6, 2020

Scary Spider Chart

Spider chart.  Creator: redacted.
The spider chart shown in this post came from a dissertation that studied multi-agent systems. A spider chart, also known as a radar chart, is used to graph multivariate data.  In a typical example, each alternative has a polygon that connects the points representing its performance on multiple measures, with one point on one spoke for each measure.  This can be effective for showing that one alternative performs well on some measures while another alternative performs well on others; in that case, the polygons for those alternatives will have different shapes.  Conversely, the polygons for two alternatives will be very similar if their performance is similar across the measures.

Unfortunately, the spider chart shown here reverses the typical use.  Two alternatives (adaptive risk and fixed risk) were tested under three scenarios (low, medium, and high workload).  In this chart, there are six spokes, one for each alternative-scenario combination; a typical chart would instead have three spokes (one per measure) and six polygons (one per combination).  Here there are three polygons, one for each measure: profit, completed percentage, and failed percentage.  The last two measures always add to 100%, so one of them is redundant.

Thus, there are only twelve useful data points (two measures for six combinations). The data-to-ink ratio is very low.  Given the small amount of data, a simple table may be the best way to convey this information.
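For instance, a table along the following lines would present all twelve values directly (the cells are left blank here because I am not reproducing the dissertation's numbers):

                 Fixed risk               Adaptive risk
  Workload    Profit   Completed %     Profit   Completed %
  Low           ...        ...           ...        ...
  Medium        ...        ...           ...        ...
  High          ...        ...           ...        ...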

The purpose of this spider chart is to show how the two alternatives compare on the performance measures, but this chart makes that comparison very difficult. A reader normally relies upon the slope of a curve (either positive or negative) to determine how performance is changing, but that will not work here because the spokes for the different scenarios point in different directions and one performance measure is the complement of the other.

Because there are effectively two performance measures, a two-dimensional scatter plot (with appropriate labels) would have worked better.  The second chart (which I created using the same data) is one possibility; it makes the change from fixed risk to adaptive risk clearer, but it still has a low data-to-ink ratio.
Two-dimensional scatter plot.  Creator: Jeffrey W. Herrmann.
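For readers who want to construct a similar chart, here is a minimal matplotlib sketch of that layout.  The data values below are placeholders, not the dissertation's results.

```python
import matplotlib.pyplot as plt

# Placeholder (completed %, profit) pairs for each alternative under each
# workload scenario; these are NOT the dissertation's actual data.
data = {
    "Fixed risk":    {"low": (90, 120), "medium": (80, 150), "high": (70, 160)},
    "Adaptive risk": {"low": (95, 130), "medium": (88, 170), "high": (80, 190)},
}

fig, ax = plt.subplots()
for alt, marker in zip(data, ("o", "s")):
    xs = [data[alt][s][0] for s in ("low", "medium", "high")]
    ys = [data[alt][s][1] for s in ("low", "medium", "high")]
    ax.plot(xs, ys, marker=marker, label=alt)   # connect scenarios to show the trend
    for scenario, x, y in zip(("low", "medium", "high"), xs, ys):
        ax.annotate(scenario, (x, y))           # label each point with its workload

ax.set_xlabel("Completed (%)")
ax.set_ylabel("Profit")
ax.legend()
plt.show()
```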



Saturday, April 4, 2020

Which face mask to make?

During the coronavirus pandemic, face masks have become an important tool for preventing transmission of the virus and protecting health-care workers, but commercial face mask supplies are low. Members of the public are mobilizing to make face masks, but there are many different designs and options that have different purposes. I've created a web page to help one determine which face mask to make.

(Image credit: UnityPoint Health, Cedar Rapids, Iowa.)