Conditional Decisions Model

Humans use many deductive reasoning methods to reach conclusions they believe are correct or logical. However, not all types of syllogisms are created equal. Some invite biases and fallacies that go unnoticed, and some require counterexamples to evaluate, yet attempting to prove a syllogism invalid demands more effort than constructing a valid one. Therefore, the key to developing a system that couples with human operators who rely on conditional reasoning is to address the shortcomings of human thinking, serve as a safety net, and ultimately minimize both errors in human decisions and cognitive overload.

Humans’ common practices

The shortcomings of human deductive reasoning lie within the four types of conditional syllogisms - modus ponens (MP), affirmation of the consequent (AC), denial of the antecedent (DA), and modus tollens (MT) (Schroyens et al., 2001).
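
To make the distinction concrete, the validity of each form can be checked mechanically. The short Python sketch below is an illustration added here, not material from the cited studies; it enumerates the truth values of an antecedent p and a consequent q and reports which forms admit counterexamples.

    # Truth-table check of the four conditional syllogisms (illustrative sketch).
    # The material conditional "if p then q" is false only when p is true and q is false.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    # For each form: the second premise and the conclusion, as functions of (p, q).
    forms = {
        "MP (modus ponens)":             (lambda p, q: p,     lambda p, q: q),
        "AC (affirming the consequent)": (lambda p, q: q,     lambda p, q: p),
        "DA (denying the antecedent)":   (lambda p, q: not p, lambda p, q: not q),
        "MT (modus tollens)":            (lambda p, q: not q, lambda p, q: not p),
    }

    for name, (premise, conclusion) in forms.items():
        # A form is valid iff no assignment makes both premises true and the conclusion false.
        counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                           if implies(p, q) and premise(p, q) and not conclusion(p, q)]
        verdict = "valid" if not counterexamples else f"invalid, e.g. (p, q) = {counterexamples[0]}"
        print(f"{name}: {verdict}")

Running the sketch reports MP and MT as valid and AC and DA as invalid, matching the classification above.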

Nearly every decision a human makes can be cast as some form of syllogistic reasoning, from the simple choice of whether to bring an umbrella when leaving the house to complicated judgments such as whether a patient is ready for discharge from an intensive care unit (ICU).

Because AC and DA are invalid syllogisms whose flaws are hard to detect, humans may draw conclusions from them without realizing it. Additionally, Schroyens et al. (2001) found that people endorse affirmations more readily than denials; that is, MP is drawn more often than MT, and AC more often than DA. Because evaluating DA and MT requires searching for counterexamples, these forms place an extra cognitive workload on humans and are less efficient to process. Focusing only on affirmations, however, leads to incomplete consideration of the possibilities and ultimately to erroneous decisions.

In the case of ICU discharge

When determining readiness for ICU discharge, physicians weigh physiological metrics and reason about them through some form of conditional syllogism, including the invalid ones. Chan et al. (2012) found that only a small number of the physicians surveyed used the provided guidelines when deciding on ICU discharge. Even worse, non-medical factors such as bed availability could further affect the decision. Without the guidelines, physicians might rule out a specific condition simply because one physiological parameter falls within the normal range. Suppose, for example, that the relevant guidance states that if a patient has had a heart attack, then a given metric is above 100, so an elevated value argues against discharge. Under time pressure and other physical or mental demands such as fatigue, a physician may invert this rule into its converse (if the metric is above 100, then the patient has had a heart attack) and then deny its antecedent: the metric is below 100, so there was no heart attack, and the patient is therefore safe for discharge. That DA inference is invalid; it overlooks the counterexamples of patients who had a heart attack even though the metric stayed below 100.
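
The shortcut can be made tangible with a few lines of code. In the sketch below, the metric name, the threshold, and the patient record are hypothetical values chosen for illustration; a single counterexample record is enough to defeat the rule.

    # The physician's shortcut, encoded as a rule: metric below the threshold => no heart attack.
    # The metric, threshold, and record are hypothetical values for illustration only.

    def shortcut_says_no_heart_attack(metric_value, threshold=100):
        return metric_value < threshold

    # One counterexample record suffices to show the shortcut is unsafe:
    patient = {"metric": 87, "had_heart_attack": True}

    if shortcut_says_no_heart_attack(patient["metric"]) and patient["had_heart_attack"]:
        print("Counterexample: the metric is below 100, yet the patient did have a heart attack;"
              " the shortcut would have cleared this patient for discharge.")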

Recommendation

Therefore, to arrive at logically sound conclusions, both validation and falsification need to be considered. Schroyens et al. (2001) suggest that one should assume the proposed conclusion is false and search for counterexamples, accepting the conclusion as right only when none can be found; this proposal should be adopted here.

For example, the computer could provide counterexamples by composing a list of past cases in which the patient had a heart attack although the metric stayed below 100. The machine could then suggest value ranges that support an MP syllogism from the opposite angle - if the metric falls within such a range, then there was no heart attack - so that physicians can reach a more complete decision.
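
A minimal sketch of what this machine support might look like is given below, assuming the system can query historical records; the field names, threshold, and example data are assumptions made for illustration rather than features of any existing system.

    # Sketch of a counterexample search over historical records. Each record is assumed to
    # carry the metric in question and whether a heart attack actually occurred; the field
    # names ("metric", "heart_attack") and the threshold are illustrative assumptions.

    def find_counterexamples(records, threshold=100):
        """Return cases refuting 'metric under threshold => no heart attack':
        patients who had a heart attack although the metric stayed below the threshold."""
        return [r for r in records if r["heart_attack"] and r["metric"] < threshold]

    def opposite_angle_rule_holds(records, low, high):
        """Check the reformulated MP rule 'if the metric is within [low, high], then no
        heart attack'; it is only safe to rely on if no record violates it."""
        violations = [r for r in records if low <= r["metric"] <= high and r["heart_attack"]]
        return len(violations) == 0, violations

    # Invented example data
    records = [
        {"metric": 120, "heart_attack": True},
        {"metric": 95,  "heart_attack": True},   # counterexample to the shortcut
        {"metric": 80,  "heart_attack": False},
        {"metric": 60,  "heart_attack": False},
    ]

    print(find_counterexamples(records))               # cases the physician should review
    print(opposite_angle_rule_holds(records, 50, 90))  # can MP be applied safely in this range?

The first call lists the cases the physician should review, and the second reports whether the reformulated MP rule holds without exception in the proposed range.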

Although humans are capable of constructing MP syllogisms from the opposite angle and of finding counterexamples, as Schroyens et al. (2001) discuss, the negation process required to falsify a conclusion is both time-consuming and cognitively demanding because the reasoner must check the predicted possibilities against all relevant prior knowledge in long-term memory. Machines should take over such tasks, especially given the vast amount of data they can access. In other words, the system should cross-check human decisions by performing this process of negation on them.

Aside from falsifying human decisions, a machine coupled with a human operator can also assess the operator’s cognitive workload. Much of the research on cognitive workload relies on fMRI and other physiological measurements, but wearing such devices while working may interfere with the work itself and has yet to be transferred to real-life settings. Byrne (2011) therefore proposed primary task intensity as a measure of cognitive workload. Since the machine already assists the operator with falsification, it can also be programmed to track the number and difficulty of the decisions being made, yielding a real-time measure of cognitive workload.
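
One way such a measure could be operationalized is sketched below: the system logs each decision with a rough difficulty weight and sums the weights over a sliding time window. The weights, window length, and alert threshold are illustrative assumptions, not values drawn from Byrne (2011).

    # Sketch of a primary-task-intensity workload monitor: count and weight the operator's
    # recent decisions in a sliding time window. Difficulty weights, window length, and the
    # alert threshold are illustrative assumptions.
    import time
    from collections import deque

    class WorkloadMonitor:
        def __init__(self, window_seconds=600, alert_threshold=12.0):
            self.window_seconds = window_seconds
            self.alert_threshold = alert_threshold
            self.events = deque()  # (timestamp, difficulty weight)

        def log_decision(self, difficulty, timestamp=None):
            """Record one decision; 'difficulty' is a rough weight, e.g. 1 = routine, 3 = complex."""
            self.events.append((timestamp if timestamp is not None else time.time(), difficulty))

        def current_load(self, now=None):
            """Sum of difficulty weights for decisions inside the sliding window."""
            now = now if now is not None else time.time()
            while self.events and self.events[0][0] < now - self.window_seconds:
                self.events.popleft()
            return sum(weight for _, weight in self.events)

        def should_prompt_review(self, now=None):
            """True when the measure suggests the operator should re-examine recent conclusions."""
            return self.current_load(now) >= self.alert_threshold

    # Usage: log each discharge decision as it is made and poll the monitor periodically.
    monitor = WorkloadMonitor()
    monitor.log_decision(difficulty=3)   # e.g., a borderline discharge call
    monitor.log_decision(difficulty=1)   # e.g., a routine confirmation
    print(monitor.current_load(), monitor.should_prompt_review())

When the windowed sum crosses the threshold, the system can prompt the operator to re-examine recent conclusions, which is the reminder discussed next.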

A high workload reading can serve as a reminder for physicians to assess their decisions more critically. Cognitive workload matters for reasoning because reasoning draws on working memory, which underpins a wide range of cognitive abilities, including problem-solving and prediction. An increase in cognitive workload is therefore expected to accompany poorer judgments, more invalid syllogisms, and a greater likelihood of failing to detect them.


References

Byrne, A. (2011). Measurement of mental workload in clinical medicine: A review study. Anesthesiology and Pain Medicine, 1(2), 90–94. https://doi.org/10.5812/kowsar.22287523.2045

Chan, C. W., Farias, V. F., Bambos, N., & Escobar, G. J. (2012). Optimizing intensive care unit discharge decisions with patient readmissions. Operations Research, 60(6), 1323–1341. https://doi.org/10.1287/opre.1120.1105

Schroyens, W. J., Schaeken, W., & D’Ydewalle, G. (2001). The processing of negations in conditional reasoning: A meta-analytic case study in mental model and/or mental logic theory. Thinking & Reasoning, 7(2), 121–172. https://doi.org/10.1080/13546780042000091