SRK Taxonomy Applied to Pilot-Autopilot Interaction


In an ideal world, autopilot takes over cognitively demanding work, increases efficiency, and reduces human error. However, current technology cannot yet operate fully autonomously in the complex, dynamic world we live in. Because current autopilot systems lack reasoning ability, it is best to keep autopilot operation constrained and always under human control rather than granting it complete autonomy, at least until machine learning and artificial intelligence mature to the point where a machine can model the world and adapt on its own.

Rasmussen’s SRK taxonomy

Jens Rasmussen’s model divides human behavior into three levels: skill-based, rule-based, and knowledge-based (Cummings, 2014). According to Fleming and Pritchett (2016), skill-based behavior is smooth, highly practiced sensory-motor performance that is often automatic. Skill-based actions are mainly what automated machines perform today because they are rudimentary, requiring little cognitive effort: turning on a light, moving the control yoke, or pulling a lever inside a cockpit. Rule-based behaviors are routine actions governed by rules (Fleming & Pritchett, 2016). According to Cummings (2014), rule-based actions are what pilots spend most of their training learning, for example, which procedure to execute when a particular warning light illuminates. With advances in technology and in the mathematical algorithms used to program automation, many machines can now follow “if-then” statements and act on data. The third level is knowledge-based behavior, defined as problem solving (Fleming & Pritchett, 2016). These are the actions human operators excel at, such as maintaining situational awareness and responding to unexpected events, and they remain particularly difficult for automated machines.
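The “if-then” character of rule-based automation can be illustrated with a minimal sketch. The indications and responses below are hypothetical, invented for illustration, and not drawn from any real avionics system:

```python
# Illustrative sketch: rule-based automation as explicit "if-then"
# condition-action pairs. All indications and responses are hypothetical.

def rule_based_response(indication: str) -> str:
    """Return the prescribed action for a given cockpit indication."""
    rules = {
        "low_fuel": "divert to nearest suitable airport",
        "cabin_pressure": "descend and don oxygen masks",
        "engine_fire": "run the engine-fire checklist",
    }
    # Anything outside the programmed rules falls through: the machine
    # has no judgment for unanticipated (knowledge-based) situations.
    return rules.get(indication, "no programmed response; hand control to pilot")

print(rule_based_response("low_fuel"))
print(rule_based_response("unexpected_vibration"))
```

The fall-through branch is the crux: within its rule table the machine responds instantly and consistently, but a single unanticipated indication exhausts its competence.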

Human strengths and weaknesses

Humans have physiological and psychological limitations. As Cummings (2014) points out, it is challenging for pilots to maintain focused attention for more than thirty minutes, yet manual flying demands sustained attention to adjust the flight path and hold speed and altitude through continuous micro-movements. Emotions are another limitation: humans are subject to emotions, biases, and general inconsistency (Haight, 2007). A stressor can alter both psychology, such as decision-making, and physiology, such as trembling hands; even without a stressor, two trained pilots can interpret the same gauge reading differently. Performing skill-based actions is therefore not a human strength. As noted above, humans also need a large amount of training to master rule-based behaviors, eventually reaching the point where the response is automatic: knowing precisely what to do the moment a warning light appears. The remainder of that training involves preparing for unexpected scenarios, which is a human strength. Finally, because of human adaptability, flexibility, and reasoning ability, knowledge-based actions are what humans excel at.

Machine strengths and weaknesses

The limitations of today's automated machines happen to align with human operators' strengths: adaptability and flexibility. Because automated machines are programmed, they perform poorly in a world of uncertainty: they lack judgment when a situation differs from what was programmed and cannot adjust themselves to it. This makes knowledge-based behavior impossible for automated machines, because knowledge-based actions are defined as applying knowledge and analytical skills to combat uncertainty (Cummings, 2014). Rule-based actions can also be challenging when they involve uncertainty. Humans form insights by retrieving strongly and weakly associated knowledge when solving problems, but we do not yet understand how those insights develop, because the mental processes happen internally and surface only after some incubation period (Carpenter, 2020). It is not yet possible to express such intelligence in machines using mathematical algorithms, so spontaneous problem-solving ability cannot be learned by machines. On the other hand, machines remain consistent, predictable, efficient, fatigue-resistant, and reliable in a non-judgmental way, exactly as skill-based actions require.
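Machine consistency at skill-based control can be sketched with a toy proportional altitude-hold loop. All numbers and the control law are illustrative assumptions, not a real autopilot design:

```python
# Illustrative sketch: a machine performs the skill-based task of holding
# altitude with perfectly repeatable micro-corrections. Gain, step count,
# and altitudes are hypothetical values chosen for illustration.

def altitude_corrections(current_ft: float, target_ft: float,
                         gain: float = 0.1, steps: int = 5) -> list:
    """Apply the same proportional correction toward the target every step."""
    history = []
    for _ in range(steps):
        current_ft += gain * (target_ft - current_ft)  # identical rule each step
        history.append(current_ft)
    return history

# Every run yields exactly the same corrections: no fatigue, no bias,
# no inter-pilot variation.
print(altitude_corrections(34_500.0, 35_000.0))
```

The point of the sketch is the determinism: the same inputs always produce the same micro-corrections, which is precisely the consistency and fatigue-resistance that skill-based tasks demand and that humans cannot sustain.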

Conclusion

The three SRK levels can be viewed as a spectrum. Machines function best at the left end, performing skill-based tasks and some rule-based tasks. When uncertainty is embedded in rule-based tasks toward the right of the spectrum, human operators step in, accounting for unpredictability with their knowledge and problem-solving skills. As a result, in aviation and in any other industry debating how much control to give automation versus human operators, the two parties should collaborate. In autopilot, because machines excel at fine, sustained control movements, they read the instruments and fly the aircraft as programmed along a precomputed optimal flight path; when the aircraft encounters an unexpected scenario, humans step in and use their adaptability to resolve the issue. Because machine learning and artificial intelligence are still in their infancy and cannot yet be fully entrusted with operations, autonomous systems should operate in collaboration with human operators.

References

Carpenter, W. (2020). The Aha! Moment: The Science Behind Creative Insights. Toward Super-Creativity - Improving Creativity in Humans, Machines, and Human - Machine Collaborations, 1–13. https://doi.org/10.5772/intechopen.84973

Cummings, M. (2014). Man versus Machine or Man + Machine? IEEE Intelligent Systems, 29(5), 62–69. https://doi.org/10.1109/mis.2014.87

Fleming, E., & Pritchett, A. (2016). SRK as a framework for the development of training for effective interaction with multi-level automation. Cognition, Technology & Work, 18(3), 511–528. https://doi.org/10.1007/s10111-016-0376-0

Haight, J. M. (2007). Automated Control Systems Do they reduce human error and incidents? Professional Safety, 52(5), 20–27. https://aeasseincludes.assp.org/professionalsafety/pastissues/052/05/20_HaightFeature_May2007.pdf